
Appendix A Simulated Annealing

A.1 Introduction

Simulated annealing can be considered as an extension of local optimization methods because:

• This approach can be applied to criteria that are neither continuous nor continuously differentiable. It is just necessary to be able to compute the value of the criterion for any feasible solution. Thus, the criterion may be given by any type of function, or even by an algorithm that returns a numerical value starting from the values of the parameters that define a solution.
• The variables can be of a qualitative nature, needing only to be able to derive from their "values" a quantitative value of the criterion.

Indeed, simulated annealing easily applies to combinatorial optimization problems. A problem P that belongs to the field of application of simulated annealing is expressed as any optimization problem:

Find s* ∈ S such that f(s*) = Opt_{s ∈ S} f(s)

where:

• S is the set of feasible solutions. A feasible solution is one that satisfies all the constraints. We will see some examples in the next section.
• f is a "function" in the broadest sense, as explained above. This function is the criterion.
• Opt refers either to minimization or to maximization, depending on the type of problem.

A.2 Basic Requirements

In order to apply simulated annealing, we must be able to:

• Compute the value of the criterion for any feasible solution.
• Define an initial feasible solution.
• Derive a neighboring feasible solution from any current solution.

The criterion depends on the problem to be solved. An initial feasible solution may be difficult to find. A heuristic is often used to generate such a solution. Another possibility is to start with a solution that is not feasible and to penalize the criterion in order to move away from this solution as soon as possible. A neighboring solution is usually obtained by slightly altering the solution at hand. Again, the way the current solution is altered depends on the type of problem considered. Let us illustrate the basic requirements with the following four examples.

A.2.1 Traveling Salesman Problem

A salesman has to visit shops located in n different towns. The objective is to find the shortest circuit passing once through each town. This circuit starts from the salesman's office and ends at the same office, which is located in the (n + 1)-th town. For this problem, the criterion is the length of the circuit. Any circuit passing once through each town is a feasible solution. Note that the number of feasible solutions is equal to n! since n + 1 towns are concerned. Indeed, this problem is combinatorial. A neighboring solution of a given circuit is obtained by permuting 2 towns of the circuit, these towns being chosen at random among the n towns to visit.
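As an illustration, the sketch below computes the criterion (circuit length) of a feasible solution and derives a random neighbor by permuting two towns; the town coordinates are hypothetical, and the tour is represented as a permutation of the towns, with town 0 standing for the salesman's office.

```python
import math
import random

def tour_length(tour, coords):
    """Length of the circuit office -> tour[0] -> ... -> tour[-1] -> office."""
    path = [0] + tour + [0]                  # town 0 is the salesman's office
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(path, path[1:]))

def random_neighbor(tour):
    """Neighboring solution: permute two towns chosen at random."""
    i, j = random.sample(range(len(tour)), 2)
    neighbor = tour[:]
    neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
    return neighbor

# Hypothetical data: the office (index 0) plus 5 towns to visit.
coords = [(0, 0), (2, 5), (6, 1), (5, 4), (1, 3), (4, 2)]
tour = list(range(1, len(coords)))           # an initial feasible solution
print(tour_length(tour, coords), tour_length(random_neighbor(tour), coords))
```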

A.2.2 Balancing a Paced Assembly Line

We refer to the notations introduced in Chapter 7. We consider that the cycle time C is given, as well as a partial order over the set of operations and the operation times. A feasible solution is an assignment of the operations among N stations such that the partial order is satisfied and the sum of the operation times in any station is less than or equal to C.

The criterion could be (7.1) or (7.2) defined in Chapter 7, or any other criterion derived from the distribution of the operations among the stations. A way to obtain a neighboring solution has already been explained in Chapter 7. It consists either in permuting two operations located in two consecutive stations or in moving one operation to the next (or the previous) station, assuming that the capacity constraints and the partial order are satisfied by this new solution.

A.2.3 Layout Problem

This layout problem consists in arranging the resources (machines, equipment, etc.) in a shop in order to minimize a criterion that is often the sum of the products of the average flows between machines by the distance covered by these flows. A feasible solution should satisfy various constraints such as, for instance:

• The resources must be located on a surface that is limited by the walls of the shop.
• Some resources must be located near the entrance (or the exit) of the shop, for practical reasons.
• The location of some resources is fixed. This is the case when the machines are particularly heavy and require a reinforced floor.
• Some pairs of resources must be close to each other since they work together.
• Some pairs of machines must be far from each other. One reason could be that a machine emits vibrations that disturb the functioning of another.

These constraints are some of the most frequent when solving this kind of problem. A neighboring solution of a given solution is obtained either by shifting a resource to an idle location or by permuting two resources. An example is developed in Section 10.2.5.2.

A.2.4 Establishing a School Timetable

Several criteria may be proposed in such a problem such as, for instance:

• Minimize the total time students have to stay at school. This criterion can be expressed as the sum over the classes and the days of the week of the difference between the time students leave school in the evening and the time they arrive at school in the morning.
• Minimize the sum of the idle periods between the first and the last course of each day, for all the teachers and days of the week.
• Minimize the total number of courses that are taught outside a "normal" activity period. The "normal" activity period could be the period between 8 a.m. and 5 p.m.

Thus, we can use one of these criteria or a weighted sum of two of them or all of them. A feasible timetable verifies constraints like:

1. Some special courses must be taught in specific rooms (chemistry, physics, language labs, etc.).
2. A professor cannot teach more than one course at a time.
3. The timetable should fit with the courses that must be taught in the classes.
4. Each teacher should teach courses corresponding to her/his specialty.
5. Each teacher should teach a given number of hours every week.

A neighboring solution of a given solution can be obtained by:

• moving a course to a free period chosen at random;
• permuting two courses (with the corresponding teachers);
• permuting two classes that are concerned by the same course, without permuting the teachers.

The above lists are not exhaustive.

A.3 Simulated Annealing Algorithm

A.3.1 A Brief Insight into the Simulated Annealing Approach

The basic idea behind a simulated annealing algorithm is to generate step by step a sequence of solutions, without requiring an improvement of the solution at each step. Simulated annealing can keep a solution that is worse than the previous one with a certain probability. This probability diminishes when the deterioration of the criterion grows and when the number of solutions already generated increases. The goal of this approach is to avoid being entrapped in a subset of feasible solutions, as can happen when using a gradient method on a multimodal function. Consider Figure A.1, for instance, which corresponds to a minimization problem. There are three levels of contour lines visible. Each type of contour line corresponds to a value of the criterion: the thick lines represent solutions having a criterion value equal to 1000, the dotted lines represent solutions having a criterion value equal to 800, etc.


Figure A.1 The search path in the set of feasible solutions

The local minima are X1 to X5. If X0 represents the initial solution, then a gradient method would generate a sequence of solutions that tends towards the local minimum X3. Using the simulated annealing approach, we can visit several "basins" and possibly find a solution whose criterion value is better than that of X3. Now, we will investigate how to decide if we should keep or reject a solution that is generated as a neighboring solution of the previous one.

A.3.2 Accepting or Rejecting a New Solution

Let S_n be the previous solution and U(S_n) the corresponding value of the criterion. A neighboring solution S_{n+1} has been derived at random from S_n and U(S_{n+1}) is the corresponding criterion value.

If S_{n+1} is "better" than S_n, that is to say if U(S_{n+1}) ≤ U(S_n) in the case of a minimization problem and U(S_{n+1}) ≥ U(S_n) in the case of a maximization problem, we keep the solution S_{n+1} as the next current solution of the sequence.

If S_{n+1} is "worse" than S_n, then we take the solution S_{n+1} as the next current solution of the sequence with the probability calculated as follows:

p_n = exp(−Δ_n / T_n)     (A.1)

where:

Δ_n = U(S_{n+1}) − U(S_n)

T_n is a decreasing function of the rank n of the solution; T_n is called the "temperature".

Algorithm A.1 is used to accept or reject S_{n+1} when this solution is "worse" than S_n.

Algorithm A.1.

1. Generate at random a real number x in the interval [0, 1] (uniform probability density).
2. Compute p_n according to (A.1).
3. If p_n ≥ x, then keep S_{n+1} as the next current solution of the sequence; otherwise reject S_{n+1} and keep S_n as the next current solution.
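A minimal sketch of this acceptance test in Python (for a minimization problem); the deterioration and temperature values used in the call are hypothetical:

```python
import math
import random

def accept_worse(delta, temperature):
    """Algorithm A.1: accept a worse solution with probability exp(-delta / T)."""
    p = math.exp(-delta / temperature)       # Relation (A.1)
    x = random.random()                      # uniform draw on [0, 1]
    return p >= x

# Hypothetical numbers: a criterion deterioration of 3.0 at temperature 10.0.
print(accept_worse(3.0, 10.0))
```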

A.3.3 Temperature

Two decisions should be made:

• Give an initial value T_0 to the temperature.
• Define the way the temperature is reduced.

A.3.3.1 Choice of the Initial Temperature

No general algorithm exists to define the initial temperature T_0. Practically, several trials are necessary to achieve an acceptable value for T_0. This is a value large enough to guarantee a sufficient number of iterations to reach a "good" solution, but limited to avoid a computational burden. One situation should be mentioned. Consider Relation A.1 at iteration 0:

p_0 = exp(−Δ_0 / T_0)

Therefore:

ln(p_0) = −Δ_0 / T_0

Thus:

T_0 = −Δ_0 / ln(p_0)

Assume that we are able to define the maximum value Δ_max of Δ_0, that is, the maximum deterioration of the criterion when switching from a solution to a neighboring one. Suppose also that we choose p_0 as the probability of keeping a "worse" solution at the first iteration. Then:

T_0 = −Δ_max / ln(p_0)
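For instance, with a hypothetical maximum deterioration Δ_max and a chosen initial acceptance probability p_0, the initial temperature follows directly from this relation:

```python
import math

def initial_temperature(delta_max, p0):
    """T0 such that a deterioration of delta_max is accepted with probability p0."""
    return -delta_max / math.log(p0)

# Hypothetical values: worst expected deterioration 50, accepted with probability 0.8.
print(initial_temperature(50.0, 0.8))   # about 224.1
```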

A.3.3.2 Evolution of the Temperature

The temperature decreases every K_n iterations. In this notation, n represents the rank of the current solution. Different definitions of K_n are possible:

• (a) K_n = constant. In this case, the temperature decreases periodically. If K_n = 1, the temperature decreases at each iteration.
• (b) K_n = K_{n−1} + constant, K_0 being given. Thus, the size of the plateaus on which the temperature remains constant changes following an arithmetical progression.
• (c) K_n = K_{n−1} / a with a < 1. (Indeed, we keep the nearest integer value.) In this case, the size of the plateaus on which the temperature remains constant evolves following roughly a geometrical progression.
• (d) K_n = (K_{n−1})^{1/a} with a < 1. (The nearest integer value is kept.) Thus, the size of the plateaus on which the temperature remains constant grows roughly exponentially.
• (e) K_n = constant / ln(T_n). (The nearest integer value is retained.) In this case, the size of the plateaus on which the temperature remains constant evolves roughly logarithmically. In this formula, T_n is the temperature at iteration n.

Rule (a) is the most often used and seems to be efficient for solving most problems. When the temperature decreases, several rules are possible:

1. T_n = T_{n−1} − constant;
2. T_n = a·T_{n−1} with a < 1;
3. T_n = constant / (n + 1);
4. T_n = constant / ln(n + 1).

Rule 2 is the most popular. In Algorithm A.2, presented in Section A.3.3.4, we use K_n = K (i.e., we keep the same temperature during K successive iterations) and Rule 2 to modify the temperature at each iteration.

A.3.3.3 How to End the Computation?

Three tests are possible to stop the computation:

• When the temperature becomes less than a given value ε.
• When the number of iterations exceeds a given value W.
• When no improvement occurs after a given number Z of iterations.

In the algorithm presented in the next section, we stop the computation when the temperature becomes less than a given value ε.

A.3.3.4 Simulated Annealing Algorithm

The simulated annealing algorithm presented hereafter can be modified to change the rules that define K_n and T_n. We also presume that the goal is to minimize the criterion.

In this algorithm, we chose K_n = K and T_n = a·T_{n−1} with a < 1, where T is the current temperature. S* is the best solution in the sequence of solutions generated so far.

Algorithm A.2. (Simulated Annealing)

1. Introduce T, a, K, ε.
2. Generate at random a feasible solution S_0, calculate the corresponding value U(S_0) of the criterion and set S* = S_0, U(S*) = U(S_0).
3. Set k = 0.
4. Set k = k + 1.
5. Generate at random a feasible solution S_1 in the neighborhood of S_0 and compute U(S_1).
6. Compute Δ = U(S_1) − U(S_0).
7. Test:
   7.1. If Δ ≤ 0:
        7.1.1. Set S_0 = S_1 and U(S_0) = U(S_1).
        7.1.2. If U(S_1) < U(S*), then set S* = S_1 and U(S*) = U(S_1).
   7.2. If Δ > 0, then do:
        7.2.1. Generate at random x ∈ [0, 1] (uniform distribution).
        7.2.2. Compute p = exp(−Δ / T).
        7.2.3. If x ≤ p, then set S_0 = S_1 and U(S_0) = U(S_1).
8. If k < K, go to 4; otherwise do:
   8.1. Set T = a·T.
   8.2. Set k = 0.
   8.3. If T ≥ ε, then go to 4.
9. Display S* and U(S*).

If, practically, it is impossible to generate a feasible initial solution because of the complexity of the process, we start with a solution that is not feasible and compensate by penalizing the criterion.
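The following sketch is one possible Python transcription of Algorithm A.2; the problem-specific parts (initial solution, neighborhood, criterion) are passed as functions, and the parameter values and the toy criterion used in the example call are hypothetical.

```python
import math
import random

def simulated_annealing(initial_solution, neighbor, criterion, T, a, K, eps):
    """Minimize `criterion` following Algorithm A.2 (geometric cooling, plateaus of K iterations)."""
    s0 = initial_solution()
    u0 = criterion(s0)
    best, best_u = s0, u0
    while T >= eps:
        for _ in range(K):                       # keep the same temperature during K iterations
            s1 = neighbor(s0)
            u1 = criterion(s1)
            delta = u1 - u0
            if delta <= 0:
                s0, u0 = s1, u1
                if u1 < best_u:
                    best, best_u = s1, u1
            elif random.random() <= math.exp(-delta / T):
                s0, u0 = s1, u1                  # accept a worse solution
        T *= a                                   # Rule 2: T_n = a * T_{n-1}
    return best, best_u

# Toy illustration (hypothetical): minimize a multimodal function of one integer variable.
f = lambda x: (x - 7) ** 2 + 10 * math.sin(x)
sol, val = simulated_annealing(
    initial_solution=lambda: random.randint(-50, 50),
    neighbor=lambda x: x + random.choice([-1, 1]),
    criterion=f,
    T=100.0, a=0.95, K=50, eps=0.01)
print(sol, val)
```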

A.4 Conclusion

Two noteworthy advantages of simulated annealing are:

• It is easy to program, whatever the rules used to define K_n, T_n or the test chosen to stop the computation.
• The solutions obtained when running this algorithm several times with the same data are of similar quality (i.e., close criterion values), but they may differ from each other. This allows users to choose a solution among several "good" solutions according to their experience in the field.

A.5 Recommended Reading

Azencott R (1992) Simulated Annealing: Parallelization Techniques. John Wiley & Sons, New York, NY
Cerny V (1985) A thermodynamical approach to the travelling salesman problem: an efficient simulation algorithm. J. Opt. Th. Appl. 45:41–51
Darema F, Kirkpatrick S, Norton VA (1987) Parallel algorithms for chip placement by simulated annealing. IBM J. Res. Dev. 31(3):391–402
Das A, Chakrabarti BK (eds) (2005) Quantum Annealing and Related Optimization Methods. Lecture Notes in Physics 679, Springer, Heidelberg
De Vicente J, Lanchares J, Hermida R (2003) Placement by thermodynamic simulated annealing. Phys. Lett. A 317(5–6):415–423
Eglese RW (1990) Simulated annealing: a tool for operational research. Eur. J. Oper. Res. 46:271–281
Harhalakis G, Proth J-M, Xie XL (1990) Manufacturing cell design using simulated annealing: an industrial application. J. Intell. Manuf. 1(3):185–191
Johnson DS, Aragon CR, McGeoch LA, Schevon C (1989) Optimization by simulated annealing: an experimental evaluation; Part I: Graph partitioning. Oper. Res. 37(6):865–892
Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680
Metropolis N, Rosenbluth A, Rosenbluth M, Teller A, Teller E (1953) Equation of state calculations by fast computing machine. J. Chem. Phys. 21:1087–1092
Proth J-M, Souilah A (1992) Near-optimal layout algorithm based on simulated annealing. Int. J. Syst. Aut.: Res. Appl. 2:227–243
Tam KY (1992) A simulated annealing algorithm for allocating space to manufacturing cells. Int. J. Prod. Res. 30(1):63–87
Ware JM, Thomas N (2003) Automated cartographic map generalisation with multiple operators: a simulated annealing approach. Int. J. Geogr. Inf. Sci. 17(8):743–769
Weinberger E (1990) Correlated and uncorrelated fitness landscapes and how to tell the difference. Biolog. Cybern. 63(5):325–336

Appendix B Dynamic Programming

B.1 Dynamic Programming (DP) Formulation

B.1.1 Optimality Principle

The optimality principle (also known as the "Bellman principle") is the basis of dynamic programming (DP). It can be formulated as follows: let C(A, B) be an optimal path between two points A and B and let X be a point belonging to this path. Then the part of the path joining X to B, denoted C(X, B), is an optimal path between X and B.

To make the optimality principle easy to understand, assume that C(X, B) is not optimal and denote by C*(X, B) an optimal path joining X to B (see Figure B.1). In this case, C(A, X) ∘ C*(X, B) would be better than C(A, B) = C(A, X) ∘ C(X, B), where "∘" denotes the concatenation operator. This is at variance with the initial assumption.


Figure B.1 Illustration of the optimality principle

A common expression of the optimality principle is: Every optimal control is composed of partial optimal controls.

Note: the reverse of this statement is not true: the concatenation of partial optimal controls is usually not an optimal control, except if the elements on which the partial controls apply are independent from each other. In other words: The global optimum is not the concatenation of local optima.

B.1.2 General DP Problem: Characteristics and Formulation

B.1.2.1 Recursive Problems and Definitions

Let P(x) be the set of predecessors of x and S(x) the set of successors of x. A dynamic programming approach applies to recursive problems that meet the following characteristics:

1. The system is made up of a finite number of states.
2. Two particular states should be mentioned:
   – The initial state x_I, characterized by P(x_I) = ∅, which means that the initial state cannot be reached from another state (x_I does not have predecessors).
   – The final state x_F, characterized by S(x_F) = ∅, which means that no state can be reached starting from the final state (x_F does not have successors).
3. For any state x ≠ x_I and x ≠ x_F, we have P(x) ≠ ∅ and S(x) ≠ ∅.
4. Let x ≠ x_F be a state. There exists a decision that, when applied to x, leads to any y ∈ S(x).
5. Any x ≠ x_I results from a decision applied to a state z ∈ P(x).
6. Whatever the state x of the system, there does not exist a sequence of decisions that leads to x again.

If we represent the system by a set of nodes (a node represents a state), also called vertices, and a set of directed arcs (an arc joins a node x to a node y if y ∈ S(x)), we obtain a connected digraph. This digraph does not contain a circuit (directed circuit).

Finally, starting from the initial state x_I, a finite sequence of feasible decisions always exists to reach the final state x_F. A positive real value, which can be a distance, a cost or any other characteristic, is associated with each decision.


Figure B.2 Illustration of the general dynamic programming problem

As mentioned above, such a system can be represented by a connected digraph as in Figure B.2. The nodes of the digraph represent the states of the system. The directed arcs join the nodes to their successors. They represent the decisions that are made in the states represented by the origins of the arcs to reach the states represented by the ends of the arcs. The values associated to the arcs represent the “costs” of the decisions.

A path is a sequence of nodes that starts with the initial node x_I and ends with the final node x_F. The length of a path is the sum of the values associated with the arcs that belong to the path. For instance, in Figure B.2 the length of the path {x_I, x_2, x_4, x_7, x_9, x_F} is 8 + 9 + 11 + 11 + 5 = 44. The objective is to find an optimal path: a path that has the minimum (or the maximum) length, depending on the type of problem at hand.

Consider a node x (x ≠ x_I) of the digraph and assume that we know the optimal path that joins x_I to any predecessor y of x (y ∈ P(x)). Let K(y) be the length of this optimal path. According to the optimality principle:

K(x) = Opt_{y ∈ P(x)} { K(y) + w(y, x) }     (B.1)

where w(y, x) is the "cost" associated with the directed arc (y, x).

Note that K(x_I) = 0. Indeed, (B.1) can be applied only if K(y) is known for all y ∈ P(x). Thus, a constraint applies to the order in which the optimal "costs" are computed. Furthermore, each time a new "cost" is computed, we keep the predecessor that led to the optimum, as seen in the examples presented in the next subsection. This is necessary to build up the optimal path when K(x_F) is computed. This approach is illustrated in Figure B.3.

Figure B.3 Forward dynamic programming approach

In the next subsection, we explain why it is described as the forward dynamic programming approach.

B.1.2.2 Forward Step

For forward formulation, the solution is built starting from the initial node. The algorithm is given hereafter.

Algorithm B.1.

1. Set K(x_I) = 0 (initialization).
2. Select x such that K(y) has already been computed for every y ∈ P(x).
3. Apply Equation B.1 to compute K(x) and denote by p(x) the predecessor of x that led to the optimum. Note that several such predecessors may exist, which means that several optimal solutions are available.
4. Test:
   4.1. If x ≠ x_F, then go to 2.
   4.2. If x = x_F, then do:
        4.2.1. K(x_F) is the value associated with the optimal sequence.
        4.2.2. Build the optimal sequence backward:
               x*_1 = p(x_F), x*_2 = p(x*_1), ..., x*_n = p(x*_{n−1}), x_I = p(x*_n).
5. Display the optimal sequence x_I, x*_n, x*_{n−1}, ..., x*_2, x*_1, x_F and the associated cost K(x_F).

Remark: If several predecessors of a given node lead to the optimum of (B.1), the same number of optimal sequences can be built at Stage 4.2.2 (see Example 2).
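A compact Python sketch of Algorithm B.1 on a small hypothetical digraph (not the one of Figure B.2): nodes are processed once all their predecessors are treated, the chosen predecessor p(x) is recorded, and the optimal path is rebuilt backward.

```python
def forward_dp(arcs, source, sink, opt=min):
    """arcs: dict {(y, x): w(y, x)}. Returns the optimal path from source to sink and its length."""
    preds = {}
    for (y, x), w in arcs.items():
        preds.setdefault(x, []).append((y, w))
    K = {source: 0}                                # K(x_I) = 0
    p = {}
    pending = {x for x in preds if x not in K}
    while pending:
        # Step 2 of Algorithm B.1: pick nodes whose predecessors are all treated.
        ready = [x for x in pending if all(y in K for y, _ in preds[x])]
        for x in ready:
            K[x], p[x] = opt((K[y] + w, y) for y, w in preds[x])   # Relation (B.1)
            pending.remove(x)
    # Build the optimal sequence backward (Step 4.2.2).
    path, x = [sink], sink
    while x != source:
        x = p[x]
        path.append(x)
    return path[::-1], K[sink]

# Hypothetical digraph with arc costs.
arcs = {("I", "a"): 3, ("I", "b"): 8, ("a", "c"): 5, ("b", "c"): 2,
        ("a", "F"): 12, ("c", "F"): 4}
print(forward_dp(arcs, "I", "F", opt=min))   # shortest path: (['I', 'a', 'c', 'F'], 12)
print(forward_dp(arcs, "I", "F", opt=max))   # longest path:  (['I', 'a', 'F'], 15)
```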

Example 1: A Minimization Problem Using a Forward DP Approach

Consider the digraph presented in Figure B.2 and assume that we are interested in finding the shortest path between x_I and x_F.

We start by setting K(x_I) = 0. Then we consider the nodes whose predecessors have all been previously processed and apply (B.1). The node that provides the minimum value is given between parentheses.

K(x_1) = K(x_I) + w(x_I, x_1) = 0 + 3 = 3   (x_I)

K(x_2) = K(x_I) + w(x_I, x_2) = 0 + 8 = 8   (x_I)

K(x_3) = K(x_1) + w(x_1, x_3) = 3 + 13 = 16   (x_1)

K(x_4) = Min{ K(x_1) + w(x_1, x_4), K(x_I) + w(x_I, x_4), K(x_2) + w(x_2, x_4) } = Min{ 3 + 5, 0 + 14, 8 + 9 } = 8   (x_1)

K(x_5) = Min{ K(x_I) + w(x_I, x_5), K(x_2) + w(x_2, x_5) } = Min{ 0 + 17, 8 + 10 } = 17   (x_I)

K(x_6) = Min{ K(x_2) + w(x_2, x_6), K(x_5) + w(x_5, x_6) } = Min{ 8 + 11, 17 + 7 } = 19   (x_2)

K(x_7) = Min{ K(x_3) + w(x_3, x_7), K(x_4) + w(x_4, x_7) } = Min{ 16 + 12, 8 + 11 } = 19   (x_4)

K(x_8) = Min{ K(x_4) + w(x_4, x_8), K(x_6) + w(x_6, x_8), K(x_5) + w(x_5, x_8) } = Min{ 8 + 2, 19 + 10, 17 + 12 } = 10   (x_4)

K(x_9) = Min{ K(x_3) + w(x_3, x_9), K(x_7) + w(x_7, x_9) } = Min{ 16 + 15, 19 + 11 } = 30   (x_7)

K(x_10) = Min{ K(x_3) + w(x_3, x_10), K(x_9) + w(x_9, x_10) } = Min{ 16 + 20, 30 + 5 } = 35   (x_9)

K(x_F) = Min{ K(x_10) + w(x_10, x_F), K(x_9) + w(x_9, x_F), K(x_8) + w(x_8, x_F), K(x_5) + w(x_5, x_F) } = Min{ 35 + 12, 30 + 5, 10 + 4, 17 + 25 } = 14   (x_8)

As shown, the length of the shortest path is 14. We now have to build the shortest path backward. The last node of the path is x_F. The node kept when processing x_F is x_8: this node will precede x_F in the shortest path.

The node kept when processing x_8 is x_4: this node will precede x_8 in the shortest path.

The node kept when processing x_4 is x_1: this node will precede x_4 in the shortest path.

The node kept when processing x_1 is x_I: this node will precede x_1 in the shortest path.

Finally, the shortest path is < x_I, x_1, x_4, x_8, x_F >.

Example 2: A Maximization Problem Using a Forward DP Approach

Assume now that we are interested in computing the longest path between x_I and x_F in the digraph represented in Figure B.2. The process is the same as previously, after replacing Min by Max and keeping the node that leads to the maximum each time a node is processed. We obtain:

K(x_I) = 0

K(x_1) = K(x_I) + w(x_I, x_1) = 0 + 3 = 3   (x_I)

K(x_2) = K(x_I) + w(x_I, x_2) = 0 + 8 = 8   (x_I)

K(x_3) = K(x_1) + w(x_1, x_3) = 3 + 13 = 16   (x_1)

K(x_4) = Max{ K(x_1) + w(x_1, x_4), K(x_I) + w(x_I, x_4), K(x_2) + w(x_2, x_4) } = Max{ 3 + 5, 0 + 14, 8 + 9 } = 17   (x_2)

K(x_5) = Max{ K(x_I) + w(x_I, x_5), K(x_2) + w(x_2, x_5) } = Max{ 0 + 17, 8 + 10 } = 18   (x_2)

K(x_6) = Max{ K(x_2) + w(x_2, x_6), K(x_5) + w(x_5, x_6) } = Max{ 8 + 11, 18 + 7 } = 25   (x_5)

K(x_7) = Max{ K(x_3) + w(x_3, x_7), K(x_4) + w(x_4, x_7) } = Max{ 16 + 12, 17 + 11 } = 28   (x_3 and x_4)

K(x_8) = Max{ K(x_4) + w(x_4, x_8), K(x_6) + w(x_6, x_8), K(x_5) + w(x_5, x_8) } = Max{ 17 + 2, 25 + 10, 18 + 12 } = 35   (x_6)

K(x_9) = Max{ K(x_3) + w(x_3, x_9), K(x_7) + w(x_7, x_9) } = Max{ 16 + 15, 28 + 11 } = 39   (x_7)

K(x_10) = Max{ K(x_3) + w(x_3, x_10), K(x_9) + w(x_9, x_10) } = Max{ 16 + 20, 39 + 5 } = 44   (x_9)

K(x_F) = Max{ K(x_10) + w(x_10, x_F), K(x_9) + w(x_9, x_F), K(x_8) + w(x_8, x_F), K(x_5) + w(x_5, x_F) } = Max{ 44 + 12, 39 + 5, 35 + 4, 18 + 25 } = 56   (x_10)

The length of the longest path is 56. We now have to build this path backward.

The last node of the path is x_F. The node kept when processing x_F is x_10: this node will precede x_F in the longest path.

The node kept when processing x_10 is x_9: this node will precede x_10 in the longest path.

The node kept when processing x_9 is x_7: this node will precede x_9 in the longest path.

When processing x_7 we have to keep two nodes: x_3 and x_4. Thus, we will obtain 2 longest paths.

Path 1:

The node kept when processing x_3 is x_1: this node will precede x_3 in the first longest path.

The node kept when processing x_1 is x_I: this node will precede x_1 in the first longest path.

Finally, the first longest path is < x_I, x_1, x_3, x_7, x_9, x_10, x_F >.

Path 2:

The node kept when processing x_4 is x_2: this node will precede x_4 in the second longest path.

The node kept when processing x_2 is x_I: this node will precede x_2 in the second longest path.

Finally, the second longest path is < x_I, x_2, x_4, x_7, x_9, x_10, x_F >.

B.1.2.3 Backward Formulation

Consider a node x (x ≠ x_F) of the digraph and assume that we know the optimal path that joins any y ∈ S(x) to x_F. Let L(y) be the length of this optimal path. According to the optimality principle:

L(x) = Opt_{y ∈ S(x)} { L(y) + w(x, y) }     (B.2)

where w(x, y) is the "cost" associated with the directed arc (x, y).

Note that L(x_F) = 0. In this approach, the "cost" related to a node can be computed only if the "costs" of all the successors of this node have been computed previously. The computation stops when L(x_I) is obtained. Each time (B.2) is used to compute the "cost" L(x) of a node x, we store in s(x) the successor(s) of x that provide the optimal value: these nodes will be used to build the optimal sequence of nodes. This backward approach is illustrated in Figure B.4. The algorithm is given hereafter.

Algorithm B.2.

1. Set L(x_F) = 0 (initialization).
2. Select x such that L(y) has already been computed for every y ∈ S(x).
3. Apply Equation B.2 to compute L(x) and denote by s(x) the successor of x that led to the optimum. Note that several such successors may exist, which means that several optimal solutions are available.
4. Test:
   4.1. If x ≠ x_I, then go to 2.
   4.2. If x = x_I, do:
        4.2.1. L(x_I) is the value associated with the optimal sequence.
        4.2.2. Build the optimal sequence forward:
               x*_1 = s(x_I), x*_2 = s(x*_1), ..., x*_n = s(x*_{n−1}), x_F = s(x*_n).
5. Display the optimal sequence and the associated "cost" L(x_I).

Figure B.4 Backward dynamic programming approach

Remark: If several successors of a given node lead to the optimum of (B.2), the same number of optimal sequences can be built at Stage 4.2.2 (see Example 3).

In Example 3, the backward DP approach is applied to find the longest path in the digraph presented in Figure B.2.

Example 3: A Maximization Problem Using a Backward DP Approach We obtain successively:

L(x_F) = 0

L(x_10) = w(x_10, x_F) = 12   (x_F)

L(x_9) = Max( w(x_9, x_F), w(x_9, x_10) + L(x_10) ) = Max( 5, 5 + 12 ) = 17   (x_10)

L(x_8) = w(x_8, x_F) = 4   (x_F)

L(x_6) = w(x_6, x_8) + L(x_8) = 10 + 4 = 14   (x_8)

L(x_7) = w(x_7, x_9) + L(x_9) = 11 + 17 = 28   (x_9)

L(x_5) = Max( w(x_5, x_F), w(x_5, x_8) + L(x_8), w(x_5, x_6) + L(x_6) ) = Max( 25, 12 + 4, 7 + 14 ) = 25   (x_F)

L(x_4) = Max( w(x_4, x_7) + L(x_7), w(x_4, x_8) + L(x_8) ) = Max( 11 + 28, 2 + 4 ) = 39   (x_7)

L(x_3) = Max( w(x_3, x_7) + L(x_7), w(x_3, x_9) + L(x_9), w(x_3, x_10) + L(x_10) ) = Max( 12 + 28, 15 + 17, 20 + 12 ) = 40   (x_7)

L(x_2) = Max( w(x_2, x_4) + L(x_4), w(x_2, x_6) + L(x_6), w(x_2, x_5) + L(x_5) ) = Max( 9 + 39, 11 + 14, 10 + 25 ) = 48   (x_4)

L(x_1) = Max( w(x_1, x_3) + L(x_3), w(x_1, x_4) + L(x_4) ) = Max( 13 + 40, 5 + 39 ) = 53   (x_3)

L(x_I) = Max( w(x_I, x_1) + L(x_1), w(x_I, x_4) + L(x_4), w(x_I, x_2) + L(x_2), w(x_I, x_5) + L(x_5) ) = Max( 3 + 53, 14 + 39, 8 + 48, 17 + 25 ) = 56   (x_1, x_2)

There are two optimal sequences of nodes. One of them contains x_1 and the second contains x_2. Both are generated forward and start with x_I. We obtain:

< x_I, x_1, x_3, x_7, x_9, x_10, x_F > and < x_I, x_2, x_4, x_7, x_9, x_10, x_F >.

Indeed, the optimal "cost" is 56.

B.2 Illustrative Problems

The difficulty encountered when facing a problem is to recognize its nature, that is to say the type of approach that could help to solve it. This is particularly true when DP is a possible approach. In this section, we develop some well-known problems that can be solved using DP.

B.2.1 Elementary Inventory Problem

A company wants to establish a production schedule for an item during the next H elementary periods, an elementary period being either a day or a week or a month, depending on the type of production. H is the horizon of the problem. The manufacturing time to produce a batch of items is negligible. The notations used to state and analyze the problem are the following, for i = 1, ..., H:

• x_i: production scheduled during the i-th period. This production becomes available at the end of the period. The variables x_i are the control of the system.
• d_i: demand requirement at the end of period i. These demands are known.
• y_i: inventory level during period i + 1.

The following state equation characterizes the evolution of the system:

y_i = y_{i−1} + x_i − d_i,   i = 1, ..., H     (B.3)

Two sets of constraints apply:

x_i ≥ 0,   i = 1, ..., H     (B.4)

These constraints mean that the production cannot be negative.

y_i ≥ 0,   i = 1, ..., H     (B.5)

These constraints mean that backlogging is not allowed.

We also know y_0, the inventory level at the beginning of the first period (initial inventory level).

A feasible solution (or control) is given by X = {x_1, ..., x_H} that verifies Relations B.3–B.5. Two sets of costs are taken into account:

• f_i(y_{i−1}), i = 1, ..., H, which are the costs for keeping in stock a quantity y_{i−1} during the i-th period. These inventory costs are concave and non-decreasing.²
• c_i(x_i), i = 1, ..., H, which are the costs for manufacturing a quantity x_i during the i-th period. These production costs are also concave and non-decreasing.

² f(x) is concave and non-decreasing if the increase Δf of f(x) obtained when x increases by Δx ≥ 0 satisfies 0 ≤ Δf ≤ Δx.

Since the costs depend on the periods, we are in the non-stationary case (more general). Thus, the cost associated with a feasible control X = {x_1, ..., x_H} is:

C(X) = Σ_{i=1}^{H} [ f_i(y_{i−1}) + c_i(x_i) ]     (B.6)

The objective is to find a feasible control X* = {x*_1, ..., x*_H} such that:

C(X*) = Min_{X ∈ E} C(X)

where E is the set of feasible controls. It is easy to see that there exists an infinite number of feasible solutions. X* is an optimal control. Note that if

y_0 ≥ σ_1^H = Σ_{i=1}^{H} d_i

then the optimal solution is x_i = 0 for i = 1, ..., H. Analyzing this problem led to fundamental properties that exempt some of the elements of E from consideration.

Fundamental Properties

There exists an optimal control that has the following properties, for i = 1, ..., H:

1. If y_{i−1} < d_i, then x_i ∈ { d_i − y_{i−1}, d_i + d_{i+1} − y_{i−1}, ..., σ_i^H − y_{i−1} }.

   In this formulation, σ_r^s = Σ_{i=r}^{s} d_i and σ_r^s = 0 if s < r. The first property can be expressed as follows: if the inventory level during a period is not large enough to meet the next demand, then the quantity to produce during the period must be such that the sum of the inventory level and the quantity produced meets exactly a sequence of successive demands, the first of them being the next one. Indeed, the optimal number of successive demands covered by this sum is not given. This property is illustrated in Figure B.5.

2. If y_{i−1} ≥ d_i and there exists j < i such that x_j > 0, then x_i = 0.

   In other words, if the inventory level during a period is enough to satisfy the demand at the end of the period and if a production run has been made previously, then there is no production during this period. Note that if no production took place previously, then we may have to produce to reach the optimal solution.

3. If y_{i−1} ≥ d_i and x_j = 0 for j = 1, ..., i − 1, then:

   x_i ∈ { (d_i − y_{i−1})^+, (d_i + d_{i+1} − y_{i−1})^+, (σ_i^{i+2} − y_{i−1})^+, ..., (σ_i^{H−1} − y_{i−1})^+, (σ_i^H − y_{i−1})^+ }

   Remember that:

   (a)^+ = 0 if a < 0, and a otherwise.

   This third property can be expressed as follows: if the inventory level during a period is large enough to satisfy the next demand, but no production has been carried out previously, then the production during the period must be either equal to 0 or such that the sum of the inventory level and the quantity produced exactly meets a sequence of successive demands, the first of them being the next one. Indeed, the number of successive demands covered by this sum must be defined.


Figure B.5 Illustration of result 1

These three properties will allow us to introduce a DP formulation. Consider an elementary period i ∈ {1, ..., H}. The inventory level during this period is y_{i−1}. According to the possible production levels mentioned above:

y_{i−1} ∈ { (y_0 − σ_1^{i−1})^+, σ_i^s, σ_i^{s+1}, ..., σ_i^H }, where s is the smallest integer such that σ_i^s > (y_0 − σ_1^{i−1})^+.

Let C_i(y) denote the optimal value of the "cost" between elementary period i and elementary period H if the inventory level is y during period i. The backward DP formulation for the lowest inventory level is:

C_i[ (y_0 − σ_1^{i−1})^+ ] = f_i[ (y_0 − σ_1^{i−1})^+ ] + Min_{j = s, ..., H} { c_i[ σ_i^j − (y_0 − σ_1^{i−1})^+ ] + C_{i+1}[ σ_{i+1}^j ] }     (B.7)

The backward DP formulation for the other possible inventory levels is:

C_i(σ_i^j) = f_i(σ_i^j) + c_i(0) + C_{i+1}(σ_{i+1}^j)   for j = s, s + 1, ..., H     (B.8)

Assume that y_0 < σ_1^H; otherwise the optimal solution is to produce nothing on horizon H.

We set C_{H+1}(0) = 0 since, according to the above properties and the previous assumption, the inventory level at the end of the last period is equal to 0.

Algorithm B.3. (Inventory optimization algorithm)

1. Set C_{H+1}(0) = 0.
2. For i = H to 2 step −1 do:
   2.1. Compute (B.7).
   2.2. Set x_i = σ_i^{j*} − (y_0 − σ_1^{i−1})^+, where j* achieves the minimum of (B.7).
   2.3. Compute (B.8).
3. Compute (B.7) for i = 1.
4. Set x_1 = σ_1^{j*} − y_0, where j* achieves the minimum of (B.7). C_1(y_0) is the optimal cost.
5. Set x*_1 = x_1.
6. Compute y*_1 = y_0 + x*_1 − d_1.
7. For i = 2 to H do:
   7.1. If y*_{i−1} > (y_0 − σ_1^{i−1})^+ then x*_i = 0, otherwise x*_i = x_i.
   7.2. Compute y*_i = y*_{i−1} + x*_i − d_i.

Example

Consider an example defined by the following parameters:

H = 6, y_0 = 6, d_1 = 4, d_2 = 1, d_3 = 4, d_4 = 2, d_5 = 3, d_6 = 2

c_i(x) = 0 if x = 0, and 1 + x if x > 0, for i = 1, 2
c_i(x) = 0 if x = 0, and 2 + x if x > 0, for i = 3, 4, 5, 6
f_i(x) = 0 if x = 0, and 0.5·x if x > 0, for i = 1, 2
f_i(x) = 0 if x = 0, and 0.1·x if x > 0, for i = 3, 4, 5, 6

Applying Algorithm B.3 we obtain:

Step 1: i = H = 6

We search for the smallest integer s such that σ_6^s > (y_0 − σ_1^5)^+. We obtain s = 6. As a consequence:

C_6(0) = f_6(0) + c_6(2) + C_7(0) = 0 + 4 + 0 = 4 and x_6 = 2
C_6(2) = f_6(2) + c_6(0) + C_7(0) = 0.2 + 0 + 0 = 0.2

Step 2: i = 5

We search for the smallest integer s such that σ_5^s > (y_0 − σ_1^4)^+. We obtain s = 5. As a consequence:

C_5(0) = f_5(0) + Min[ c_5(3) + C_6(0), c_5(5) + C_6(2) ] = 7.2 and x_5 = 5
C_5(3) = f_5(3) + c_5(0) + C_6(0) = 4.3
C_5(5) = f_5(5) + c_5(0) + C_6(2) = 0.7

Step 3: i = 4

We search for the smallest integer s such that σ_4^s > (y_0 − σ_1^3)^+. We obtain s = 4. As a consequence:

C_4(0) = f_4(0) + Min[ c_4(2) + C_5(0), c_4(5) + C_5(3), c_4(7) + C_5(5) ] = 9.7 and x_4 = 7
C_4(2) = f_4(2) + c_4(0) + C_5(0) = 7.4
C_4(5) = f_4(5) + c_4(0) + C_5(3) = 4.8
C_4(7) = f_4(7) + c_4(0) + C_5(5) = 1.4

Step 4: i = 3

We search for the smallest integer s such that σ_3^s > (y_0 − σ_1^2)^+. We obtain s = 3. As a consequence:

C_3(1) = f_3(1) + Min[ c_3(3) + C_4(0), c_3(5) + C_4(2), c_3(8) + C_4(5), c_3(10) + C_4(7) ] = 13.5 and x_3 = 10
C_3(4) = f_3(4) + c_3(0) + C_4(0) = 10.1
C_3(6) = f_3(6) + c_3(0) + C_4(2) = 8
C_3(9) = f_3(9) + c_3(0) + C_4(5) = 5.7
C_3(11) = f_3(11) + c_3(0) + C_4(7) = 2.5

Step 5: i = 2

We search for the smallest integer s such that σ_2^s > (y_0 − σ_1^1)^+. We obtain s = 3. As a consequence:

C_2(2) = f_2(2) + Min[ c_2(3) + C_3(4), c_2(5) + C_3(6), c_2(8) + C_3(9), c_2(10) + C_3(11) ] = 14.5 and x_2 = 10
C_2(5) = f_2(5) + c_2(0) + C_3(4) = 12.6
C_2(7) = f_2(7) + c_2(0) + C_3(6) = 11.5
C_2(10) = f_2(10) + c_2(0) + C_3(9) = 10.7
C_2(12) = f_2(12) + c_2(0) + C_3(11) = 8.5

Step 6: i = 1

We search for the smallest integer s such that σ_1^s > (y_0 − σ_1^0)^+. We obtain s = 3. As a consequence:

C_1(6) = f_1(6) + Min[ c_1(3) + C_2(5), c_1(5) + C_2(7), c_1(8) + C_2(10), c_1(10) + C_2(12) ] = 19.6 and x_1 = 3

Thus, the optimal cost is 19.6.

Forward Process

Now, we start the forward process (Steps 4 to 7 of the algorithm) to build the optimal solution. This process is summarized in Table B.1.

Table B.1 Building the optimal solution

i                       0   1   2   3   4   5   6
(y_0 − σ_1^{i−1})^+         6   2   1   0   0   0
d_i                         4   1   4   2   3   2
x*_i                        3   0   0   7   0   0
y*_i                    6   5   4   0   5   2   0

The solution is represented in Figure B.6.
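The computation above can also be reproduced with a short recursive sketch that relies on the structural properties of Section B.2.1: in each period, the production is either zero or brings the inventory up to a cumulative sum of future demands. The data below are those of the example (the cost functions c_i and f_i defined above); for these data the search returns the plan x = (3, 0, 0, 7, 0, 0) with cost 19.6, as in Table B.1.

```python
from functools import lru_cache

H = 6
y0 = 6
d = [4, 1, 4, 2, 3, 2]                               # demands d_1..d_6

def c(i, x):                                         # production cost in period i (1-based)
    return 0.0 if x == 0 else ((1 + x) if i <= 2 else (2 + x))

def f(i, y):                                         # holding cost for inventory y during period i
    return 0.0 if y == 0 else (0.5 * y if i <= 2 else 0.1 * y)

@lru_cache(maxsize=None)
def best(i, y):
    """Minimum cost from period i to H when the inventory entering period i is y."""
    if i > H:
        return 0.0, ()
    candidates = []
    if y >= d[i - 1]:
        candidates.append(0)                         # properties 2 and 3: possibly produce nothing
    cum = 0
    for j in range(i, H + 1):                        # produce up to sigma_i^j (properties 1 and 3)
        cum += d[j - 1]
        if cum - y > 0:
            candidates.append(cum - y)
    options = []
    for x in candidates:
        tail_cost, tail_plan = best(i + 1, y + x - d[i - 1])
        options.append((f(i, y) + c(i, x) + tail_cost, (x,) + tail_plan))
    return min(options)

cost, plan = best(1, y0)
print(round(cost, 1), plan)                          # expected: 19.6 (3, 0, 0, 7, 0, 0)
```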

B.2.2 Capital Placement

A finance company decides to invest Q million euros in N projects. Each project can be developed at any of K different investment levels. Investing at level k_n ∈ {1, ..., K} in project n ∈ {1, ..., N} has a cost v_n(k_n) and the future earnings are estimated to be b_n(k_n). Indeed, b_n(k_n) > v_n(k_n).

Figure B.6 Optimal inventory level

The problem can be formulated as follows:

Maximize Σ_{n=1}^{N} b_n(k_n)

subject to:

Σ_{n=1}^{N} v_n(k_n) = Q and k_n ∈ {1, ..., K}

A forward DP approach can be used to solve this problem. Let S_n(q) be the maximum earning for the first n projects if the investment is q. The DP formulation is:

S_n(q) = Max_{k_n} { b_n(k_n) + S_{n−1}(q − v_n(k_n)) }     (B.9)

Let k*_n(q) denote the optimal investment level in project n if the investment is q for the first n projects. We also set:

S_0(q) = 0, ∀q ≥ 0

Indeed, the problem is to define, in each step of the computation (i.e., for each n), the value of q.

Table B.2 Capital placement data

                 Project 1          Project 2          Project 3
k               v_1(k)  b_1(k)     v_2(k)  b_2(k)     v_3(k)  b_3(k)
0                  0       0          0       0          0       0
1                  2       7          3       4          2       5
2                  5       9          6       8          4       6
3                  6      10          7      10          6       7
4                  7      11          9      12          8      10

We illustrate this approach with a small example, where Q = 10 and the rest of the data is given in Table B.2. We apply Relation B.9 successively to n = 1, 2 and 3. The notation k*_i(z) denotes the optimal investment level for project i if the total investment for the first i projects is z.

Step 1:

S_1(0) = 0 and k*_1(0) = 0
S_1(2) = 7 and k*_1(2) = 1
S_1(5) = 9 and k*_1(5) = 2
S_1(6) = 10 and k*_1(6) = 3
S_1(7) = 11 and k*_1(7) = 4

Step 2:

S_2(0) = 0 and k*_2(0) = 0
S_2(2) = b_2(0) + S_1(2) = 7 and k*_2(2) = 0
S_2(3) = b_2(1) + S_1(0) = 4 and k*_2(3) = 1
S_2(5) = Max[ b_2(1) + S_1(2), b_2(0) + S_1(5) ] = 11 and k*_2(5) = 1
S_2(6) = Max[ b_2(0) + S_1(6), b_2(2) + S_1(0) ] = 10 and k*_2(6) = 0
S_2(7) = Max[ b_2(0) + S_1(7), b_2(3) + S_1(0) ] = 11 and k*_2(7) = 0
S_2(8) = Max[ b_2(2) + S_1(2), b_2(1) + S_1(5) ] = 15 and k*_2(8) = 2
S_2(9) = Max[ b_2(1) + S_1(6), b_2(3) + S_1(2), b_2(4) + S_1(0) ] = 17 and k*_2(9) = 3
S_2(10) = b_2(1) + S_1(7) = 15 and k*_2(10) = 1

Step 3:

S_3(10) = Max[ b_3(0) + S_2(10), b_3(1) + S_2(8), b_3(2) + S_2(6), b_3(4) + S_2(2) ] = 20 and k*_3(10) = 1

Thus, the maximal sum of future earnings is 20. The optimal strategy has to be reconstructed forward.

Since k*_3(10) = 1, the investment in project 3 should be done at level 1. The maximum for step 3 is obtained for S_2(8).
Since k*_2(8) = 2, the investment in project 2 should be done at level 2. The maximum for step 2 is obtained for S_1(2).
Since k*_1(2) = 1, the investment in project 1 should be done at level 1.
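A sketch of this computation in Python for the data of Table B.2 with Q = 10. To reproduce the worked example, the sketch spends the budget exactly, so only S_0(0) is initialized and unreachable budgets are marked with −∞.

```python
Q = 10
v = [[0, 2, 5, 6, 7], [0, 3, 6, 7, 9], [0, 2, 4, 6, 8]]      # v_n(k), Table B.2
b = [[0, 7, 9, 10, 11], [0, 4, 8, 10, 12], [0, 5, 6, 7, 10]]  # b_n(k), Table B.2

NEG = float("-inf")
S = [[NEG] * (Q + 1) for _ in range(len(v) + 1)]
S[0][0] = 0                                    # exact-budget variant of S_0
choice = [[None] * (Q + 1) for _ in range(len(v) + 1)]

for n in range(1, len(v) + 1):                 # Relation (B.9)
    for q in range(Q + 1):
        for k in range(len(v[n - 1])):
            rest = q - v[n - 1][k]
            if rest >= 0 and S[n - 1][rest] + b[n - 1][k] > S[n][q]:
                S[n][q] = S[n - 1][rest] + b[n - 1][k]
                choice[n][q] = k

# Rebuild the optimal levels, starting from the last project.
levels, q = [], Q
for n in range(len(v), 0, -1):
    k = choice[n][q]
    levels.append(k)
    q -= v[n - 1][k]
print(S[len(v)][Q], levels[::-1])              # 20 and levels [1, 2, 1]
```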

B.2.3 Project Management

B.2.3.1 Definitions and Examples

A project is a set of tasks on which a partial order applies. This partial order can be of different types:

1. Start task A after task X is completed.
2. Start task A t units of time after the completion of task X.
3. Start task A after the starting time of task X.

Table B.3 A project

Task   Duration   Constraints
A      6          /
B      11         /
C      8          /
D      7          Starts after C is completed
E      14         Starts after B is completed
F      7          Starts after B is completed
G      8          Starts after A is completed
H      2          Starts after A is completed
I      17         Starts after D and E are completed
J      5          Starts 7 units of time after D and E are completed
K      9          Starts after I and H are completed and 5 units of time after F and G are completed


Figure B.7 Graphic representation of the project

A task can be represented by a directed arc. The beginning of the arc represents the starting time of the task, the end of the arc is the completion time and the length (or weight) of the arc represents the duration of the task. Finally, a project can be represented by a digraph. To illustrate the concept, let us consider a project described in Table B.3. This project is represented in Figure B.7 by an acyclic directed graph. A forward DP approach is applied to find the longest path between nodes I and O. The results of the computation are the framed values. Thus, 51 units of time are necessary to complete this project. A backward approach shows that the critical path is B, E, I, K. Thus, to reduce the duration of the project, we have to reduce the duration of one or more of these 4 tasks, until another critical path appears.

B.2.3.2 Earliest and Latest Starting Times of Tasks

It is important to define these two limits since, as long as a task starts between its earliest and latest starting times, neither the schedule of the other tasks nor the duration of the whole project are disturbed.

For a task to start, all the tasks that precede it should be completed. As a consequence, the earliest starting times of the tasks are the results of the computation of the longest path between nodes I and O (see Figure B.7).

The computation of the latest starting times of tasks begins with the tasks without successors. We proceed backwards. The duration of the task under consideration is subtracted from the minimum of the latest starting times of its successors. Indeed, it is assumed that the latest starting times of the successors have been computed earlier. Table B.4 provides these starting times for the example given in Table B.3 and Figure B.7.

Table B.4 Earliest and latest starting times

Task   Earliest starting time   Latest starting time
A      0                        Min(29, 40) − 6 = 23
B      0                        Min(30, 11) − 11 = 0
C      0                        18 − 8 = 10
D      8                        Min(39, 25) − 7 = 18
E      11                       25 − 14 = 11
F      11                       42 − 5 − 7 = 30
G      6                        42 − 5 − 8 = 29
H      6                        42 − 2 = 40
I      25                       42 − 17 = 25
J      32                       51 − 5 = 46
K      42                       51 − 9 = 42

Note that the earliest and the latest starting times of the tasks belonging to the critical sequence are equal.
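As an illustration, the earliest starting times and the project duration of Table B.4 and Figure B.7 can be recomputed from the data of Table B.3 with a forward pass; the lag constraints (7 units for J, 5 units for K after F and G) are encoded as (predecessor, lag) pairs. This is only a sketch, not the book's tabular procedure.

```python
durations = {"A": 6, "B": 11, "C": 8, "D": 7, "E": 14, "F": 7,
             "G": 8, "H": 2, "I": 17, "J": 5, "K": 9}

# Predecessors of each task as (task, lag) pairs, from Table B.3.
preds = {"A": [], "B": [], "C": [],
         "D": [("C", 0)], "E": [("B", 0)], "F": [("B", 0)],
         "G": [("A", 0)], "H": [("A", 0)],
         "I": [("D", 0), ("E", 0)],
         "J": [("D", 7), ("E", 7)],
         "K": [("I", 0), ("H", 0), ("F", 5), ("G", 5)]}

earliest = {}
remaining = set(durations)
while remaining:
    for task in sorted(remaining):
        if all(p in earliest for p, _ in preds[task]):
            earliest[task] = max((earliest[p] + durations[p] + lag
                                  for p, lag in preds[task]), default=0)
            remaining.remove(task)
            break

duration = max(earliest[t] + durations[t] for t in durations)
print(earliest)    # A:0 B:0 C:0 D:8 E:11 F:11 G:6 H:6 I:25 J:32 K:42
print(duration)    # 51 units of time, as in Figure B.7
```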

B.2.3.3 PERT (Program Evaluation and Review Technique) Method

The PERT method is a DP method that involves three estimates of time for each task:

1. Optimistic: t_0.
2. Mean: t_m.
3. Pessimistic: t_p.

These estimates are used to define the duration t* of the task:

t* = (t_0 + 4·t_m + t_p) / 6

The duration of the project is computed using either the backward or the forward algorithm.
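For instance, with hypothetical estimates t_0 = 4, t_m = 6 and t_p = 11 for a task:

```python
def pert_duration(t_opt, t_mean, t_pess):
    """PERT estimate t* = (t_0 + 4*t_m + t_p) / 6."""
    return (t_opt + 4 * t_mean + t_pess) / 6

print(pert_duration(4, 6, 11))   # 6.5
```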

B.2.3.4 CPM (Critical Path Method)

This method finds a good tradeoff between the duration and basic cost of a project. The basic idea is that each task can be performed in one of two possible ways:

1. Using a standard mode that leads to a medium cost and can incur quite significant project time.
2. Using an urgent mode that leads to a high cost (since additional resources have been used) but short project time.

Assume that the cost is a linear function of time. The goal is to reduce the duration of some critical tasks. The following approach can be applied. The task with the greatest ratio (decrease of task duration) / (increase of cost) is selected. The duration of this task is reduced. Then, the critical sequence of tasks is sought again (it may be different from the previous one), and so on. The process stops when the time of the project is acceptable.

B.2.4 Knapsack Problem

The term “knapsack” is derived from the activity that consists in selecting items to pack in a knapsack, each item being defined by two parameters:

• The value of each item i, denoted by u_i. This value represents the usefulness of the item for the traveler.
• The weight of each item i, denoted by w_i.

The goal is to select, among a set of n items, the subset of items that maximizes the total value of the selected items while keeping the total weight less than a given value M. We introduce the following decision variables:

x_i = 1 if item i is selected, and 0 otherwise, for i = 1, ..., n

The problem can be formulated as:

Maximize Σ_{i=1}^{n} x_i u_i

subject to:

Σ_{i=1}^{n} x_i w_i ≤ M

and x_i ∈ {0, 1} for i = 1, ..., n.

This is the binary (0–1) knapsack problem, since only one unit of each item is available. This problem can be expressed as a DP problem. We can apply the logic used in the capital placement problem.

Let S_i(m) be the maximum value of the first i items selected when the maximum weight is m. The dynamic programming formulation is then:

S_i(m) = Max_{x_i} { S_{i−1}(m − x_i w_i) + x_i u_i }   for i = 1, ..., n
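A sketch of this recursion over items and residual weights, with hypothetical item values, weights and capacity:

```python
def knapsack(values, weights, capacity):
    """S_i(m): best value using the first i items with weight budget m."""
    S = [[0] * (capacity + 1) for _ in range(len(values) + 1)]
    for i in range(1, len(values) + 1):
        for m in range(capacity + 1):
            S[i][m] = S[i - 1][m]                              # x_i = 0
            if weights[i - 1] <= m:                            # x_i = 1
                S[i][m] = max(S[i][m],
                              S[i - 1][m - weights[i - 1]] + values[i - 1])
    return S[len(values)][capacity]

# Hypothetical items (value u_i, weight w_i) and knapsack capacity M.
print(knapsack(values=[10, 7, 12, 4], weights=[5, 3, 6, 2], capacity=9))   # 19
```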

B.3 Recommended Reading

Adda J, Cooper R (2003) Dynamic Economics. MIT Press, Cambridge, MA
Bensoussan A, Proth J-M (1984) Inventory planning in a deterministic environment: concave cost set-up. Larg. Scal. Syst. 6:177–184
Bensoussan A, Crouhy M, Proth J-M (1983) Mathematical Theory of Production Planning. North-Holland, Amsterdam
Bertsekas DP (2000) Dynamic Programming and Optimal Control. Vols. 1 & 2, 2nd edn, Athena Scientific
Buffa ES (1973) Modern Production Management. 4th edn, John Wiley & Sons, New York, NY
Buffa ES (1976) Operations Management: The Management of Production Systems. John Wiley & Sons, New York, NY
Chase RB, Aquilano NT (1981) Production and Operations Management: a Life Cycle Approach. 3rd edn, R.D. Irwin, Homewood, Ill
Cormen TH, Leiserson CE, Rivest RL, Stein C (2001) Introduction to Algorithms. 2nd edn, MIT Press, Cambridge, MA
Dolgui A, Guschinsky N, Levin G, Proth J-M (2008) Optimisation of multi-position machines and transfer lines. Eur. J. Oper. Res. 185(3):1375–1389
Giegerich R, Meyer C, Steffen P (2004) A discipline of dynamic programming over sequence data. Sci. Comput. Progr. 51:215–263
Menipaz E (1984) Essentials of Production and Operations Management. Prentice-Hall, Englewood Cliffs, NJ
Proth J-M, Hillion H (1990) Mathematical Tools in Production Management. Plenum Press, New York and London
Stokey N, Robert EL, Prescott E (1989) Recursive Methods in Economic Dynamics. Harvard University Press, Cambridge, MA
Wagner HM (1975) Principles of Operations Research. Prentice-Hall, Englewood Cliffs, NJ

Appendix C Branch-and-Bound Method

C.1 Introduction

Assume that a finite (but large) number of feasible solutions have to be examined to find the solution that has the maximal (or minimal) value of a criterion. The branch-and-bound method enumerates the feasible solutions in order to find the optimal one. A large number of solutions are eliminated from consideration by setting upper and lower bounds, so only a tiny fraction of them are examined, thus reducing calculation time. In Section C.2, we introduce the basis of the approach. Section C.3 presents some applications in order to illustrate the method.

C.2 Branch-and-Bound Bases

C.2.1 Find the Solution that Minimizes a Criterion

Let S be the set of feasible solutions and f(·) the criterion to minimize. The goal is to find x* ∈ S such that f(x*) = Min_{x ∈ S} f(x).

Assume that it is possible to define a set { S_1, S_2, ..., S_n } of solution subsets, feasible or not, such that S ⊂ ∪_{i=1}^{n} S_i. We also assume that:

1. Whatever i ∈ {1, 2, ..., n}, it is possible to find a lower bound of Min_{x ∈ S ∩ S_i} f(x). Let b_i be this lower bound.
2. We know an upper bound U of f(x*).

Basic Remark:

If b_i > U for an i ∈ {1, 2, ..., n}, then the optimal solution x* cannot belong to S ∩ S_i. As a consequence,

x* ∈ ∪_{i ∈ E_1} ( S ∩ S_i )

where E_1 = { i | i ∈ {1, ..., n} and b_i ≤ U }. In other words, x* belongs to one of the subsets S ∩ S_i for i ∈ E_1. At the end of the first iteration, there are card(E_1) subsets to analyze (card(E_1) is the number of subsets S_i that may contain an optimal solution), see Figure C.1.

For each S_i, i ∈ E_1, we define a set { S_{i,1}, S_{i,2}, ..., S_{i,n_i} } of subsets such that S_i ⊂ ∪_{j=1}^{n_i} S_{i,j}. Let b_{i,j} be a lower bound of Min_{x ∈ S ∩ S_{i,j}} f(x).

If U < b_{i,j}, then S_{i,j} is brushed aside from the list of subsets taken into account in the second iteration.

If U ≥ b_{i,j}, then the optimal solution x* may belong to S_{i,j} and this subset is integrated in the list. Thus, at the end of the second iteration, we have card(E_2) candidate subsets for further investigation, where E_2 = { (i, j) | i ∈ {1, ..., n}, j ∈ {1, ..., n_i}, b_i ≤ U, b_{i,j} ≤ U }. We then apply the same approach to each S_{i,j}, (i, j) ∈ E_2, as the one applied to S_i, i ∈ E_1. This leads to the results of the third iteration. This approach is illustrated in Figure C.2.

We continue the same process until we reach subsets that are small enough to make the computation of the optimal solution of each subset possible, and we select the best one. In some cases, only one subset remains at the last iteration.


Figure C.1 The first stage of a B&B approach



Figure C.2 The basic approach

It should be noted that:

• The number of iterations required to reach an optimal solution is unpredictable, but the closer the upper bound U is to f(x*), the smaller the number of operations on the average. The same conclusion can be drawn for the lower bounds.
• The computation of the upper bound U is often made:
  – either by choosing a feasible solution at random and assigning the value of the criterion of this solution to U;
  – or by applying an optimization heuristic algorithm to select a "good" solution;
  – or by computing the optimal solution on s ⊂ S when this optimal solution is easy to find.
  U can be refined at each iteration, taking advantage of the information obtained along the computation.
• The lower bound b_{i,j,...,r} on a subset S_{i,j,...,r} is chosen as a lower bound on S ∩ S_{i,j,...,r}. In fact, the subsets S_{i,j,...,r} are often selected according to how easy it is to compute a lower bound on them.

C.2.2 Find the Solution that Maximizes a Criterion

The approach is the same as the previous one except that:

1. Whatever i ∈ {1, 2, ..., n}, it is possible to find an upper bound b_i of Max_{x ∈ S ∩ S_i} f(x).
2. We know a lower bound U of f(x*).

Furthermore, we remove the subsets S ∩ S_i such that U > b_i. We continue by applying the same process to the subsets until an optimal solution can be computed. The computation of the bounds is made using the same approaches as in Section C.2.1.

Finally, we have to keep in mind the difficulties encountered when applying the B&B technique to a minimization problem, namely:

• computation of a root upper bound of the criterion;
• computation of lower bounds for each node at the level under consideration;
• generating the overlapping subsets at each iteration. As mentioned before, the choice of these subsets conditions the computation of the lower bounds.

Of course, the same problems appear with a maximization problem, with the roles of the lower and upper bounds exchanged.
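As a generic illustration of these ideas (not the algorithm of Section C.3), the sketch below applies branch-and-bound to the 0–1 knapsack problem of Section B.2.4: nodes fix the decision for the first items, the upper bound of a node is the value of the linear relaxation of the remaining items, and the incumbent value plays the role of the lower bound U of a maximization problem.

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """Maximize total value subject to total weight <= capacity, x_i in {0, 1}."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def upper_bound(idx, value, room):
        # LP relaxation of the items not yet fixed: fill greedily, split the last one.
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    stack = [(0, 0, capacity)]        # (next item index, current value, remaining capacity)
    while stack:
        idx, value, room = stack.pop()
        if idx == n:
            best = max(best, value)
            continue
        if upper_bound(idx, value, room) <= best:
            continue                  # prune: this node cannot beat the incumbent
        i = order[idx]
        stack.append((idx + 1, value, room))                               # x_i = 0
        if weights[i] <= room:
            stack.append((idx + 1, value + values[i], room - weights[i]))  # x_i = 1
    return best

print(knapsack_branch_and_bound([10, 7, 12, 4], [5, 3, 6, 2], 9))   # 19
```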

C.3 Applications of Branch-and-Bound

Some examples are presented to illustrate the B&B approach.

C.3.1 Assembly System with Random Component Lead Times

C.3.1.1 Stating the Problem

Since the goal is only to illustrate the B&B approach, we will restrict ourselves to the simplest case. Consider a cyclic assembly system that manufactures a single type of product by putting together n components. In other words, one unit of product requires one unit of each component i ∈ {1, ..., n}. The time axis is divided into consecutive elementary periods. In practice, an elementary period may represent a working day. Let q_k be the period at the end of which the k-th product is expected. We denote by l_i the number of periods between the instant component i is ordered and the instant it becomes available.

This random variable takes a value in {1, ..., n_i}. Let p_{i,k} denote the probability that l_i takes the value k ∈ {1, ..., n_i} or, in other words, the probability that component i becomes available k periods after the instant it has been ordered. Indeed:

Σ_{k=1}^{n_i} p_{i,k} = 1

Figure C.3 Lead time

The assembly time can be neglected. Two conditions are required to assemble the components: (i) all the components are available, and (ii) the elementary period during which the assembly operation is performed is greater than or equal to q_k. Two types of costs should be taken into account:

• Inventory costs, denoted by H_i, i = 1, ..., n, which are incurred to keep one unit of component i ∈ {1, ..., n} in stock during one period.
• A backlogging cost B that is incurred each time the assembly is delayed by one period because of unavailable component(s).

For i = 1, ..., n, let u_i be the number of periods between the instant the component is ordered and the beginning of q_k. Figure C.3 illustrates the case when u_i = 5.

Component i will be available at the beginning of period a_i = q_k − u_i + l_i. If u_i < n_i, then we may have a_i ≤ q_k or a_i > q_k. In the first case, i will be stored at least until the beginning of q_k. In the second case, backlogging is unavoidable.

Finally, if the lead times l_i and the order dates (derived from the control parameters u_i) are known, the cost incurred for assembling product k is:

C(l_1, ..., l_n; u_1, ..., u_n) = X + Y + Z     (C.1)

where:

X = Σ_{i=1}^{n} H_i (u_i − l_i)^+

Y = B · Max_{j=1,...,n} (l_j − u_j)^+

Z = Σ_{i=1}^{n} H_i · Max_{j=1,...,n} (l_j − u_j)^+

Here, X is the total inventory cost until the beginning of period q_k, Y is the backlogging cost, if any, and Z is the inventory cost corresponding to the components that are kept in stock after the beginning of period q_k. Relation C.1 can be rewritten as:

C(l_1, ..., l_n; u_1, ..., u_n) = X + W     (C.2)

where:

W = ( B + Σ_{i=1}^{n} H_i ) · Max_{j=1,...,n} (l_j − u_j)^+

In Relation C.2, it is assumed that the lead times are known. Nevertheless, they are random variables. Therefore, we derive the average cost from (C.2) and obtain:

C(u_1, ..., u_n) = R + S     (C.3)

where:

R = Σ_{i=1}^{n} H_i [ Σ_{k=1}^{u_i} (u_i − k) p_{i,k} ]

S = ( B + Σ_{i=1}^{n} H_i ) Σ_{w=1}^{v*} w × Σ_{(s_1, ..., s_n) ∈ E_w} [ Π_{i=1}^{n} p_{i,s_i} ]

In the latter expression:

v* = Max_{j=1,...,n} (n_j − u_j)^+

E_w = { (s_1, ..., s_n) | s_i ∈ {1, ..., n_i} for i = 1, ..., n and Max_{j=1,...,n} (s_j − u_j) = w }

The objective is to find the set (u_1, ..., u_n) ∈ (1, ..., n_1) × ... × (1, ..., n_n) that minimizes Expression C.3.

C.3.1.2 B&B Algorithm

Actually, it is possible to compute the value of Criterion C.3 for each feasible solution and keep the one corresponding to the minimal criterion value. However, this strategy may lead to a huge amount of computation. In contrast, as we will see in this section, the B&B approach can reach the optimal solution with a reasonable amount of computation. The following two remarks are useful to understand the suggested B&B approach:

1. Looking at Criterion C.3, it is easy to see that R is an increasing function of the parameters u_i while S is a decreasing function of these parameters. This remark allows us to find a lower bound of the optimal solution in a subset ϖ of feasible solutions. Assume that this subset is defined as follows: u_i ∈ {m_i, ..., M_i} for i = 1, ..., n. Then, we can choose b_ϖ = R_ϖ + S_ϖ as a lower bound of the optimal solution, where R_ϖ is the value of R for u_i = m_i, i = 1, ..., n, and S_ϖ is the value of S for u_i = M_i, i = 1, ..., n.

2. Assume that a new node ϖ (defined as above) is such that b_ϖ < U, where U is an upper bound of the criterion. In this case, we know that the optimal solution of the problem may belong to ϖ. Furthermore, if we compute the value U* of Criterion C.3 for u_i = ⌈(m_i + M_i) / 2⌉, where ⌈a⌉ is the smallest integer greater than or equal to a, and if U* < U, then U* is a better upper bound of the optimal solution than U.

The following algorithm translates the B&B approach to the problem at hand.

Algorithm C.1.

1. Building the initial level.

1.1. Set N = 1. The value of this variable is the number of nodes (i.e., of subsets) at the level under consideration.
1.2. Set m_i^{1,0} = 1 and M_i^{1,0} = n_i for i = 1, …, n; this is the initial set of feasible solutions.
1.3. Compute the value of Criterion C.3 for u_i = ⌈(m_i^{1,0} + M_i^{1,0})/2⌉ and assign the result to U, the upper bound of the criterion.

2. Iterations:

2.1. Set r = 0. This variable will contain the number of “active” nodes at the level under consideration.
2.2. For k = 1, …, N do:

² Recall that ⌈a⌉ is the smallest integer greater than or equal to a.

2.2.1. Set r = r + 1.
2.2.2. Set m_i^{r,1} = m_i^{k,0} and M_i^{r,1} = ⌊(m_i^{k,0} + M_i^{k,0})/2⌋ (see footnote 3) for i = 1, …, n.
2.2.3. Compute b^{r,1}, the lower bound of the criteria of the solutions that belong to the subset defined in Step 2.2.2 (see also remark 1 above).
2.2.4. If b^{r,1} < U do: (the objective is to update the upper bound)
2.2.4.1. Compute the value U* of the criterion for u_i = ⌈(m_i^{r,1} + M_i^{r,1})/2⌉ for i = 1, …, n.
2.2.4.2. If U* < U, then set U = U*.
2.2.5. If b^{r,1} ≥ U, then do r = r − 1.
2.2.6. Set r = r + 1.
2.2.7. Set m_i^{r,1} = ⌈(m_i^{k,0} + M_i^{k,0})/2⌉ and M_i^{r,1} = M_i^{k,0} for i = 1, …, n.
2.2.8. Compute b^{r,1}, the lower bound of the criteria of the solutions that belong to the subset defined in Step 2.2.7 (see also remark 1 above).
2.2.9. Repeat Steps 2.2.4 and 2.2.5.
2.3. For k = 1, …, r do m_i^{k,0} = m_i^{k,1} and M_i^{k,0} = M_i^{k,1} for i = 1, …, n.
2.4. Set N = r.
2.5. If (N = 1) and (m_i^{1,0} = M_i^{1,0} for i = 1, …, n), then u_i^* = m_i^{1,0} for i = 1, …, n is the optimal solution; otherwise go to 2.1.
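For illustration, here is a rough Python sketch of the interval-splitting loop of Algorithm C.1. It assumes that two functions R(u) and S(u) evaluating the two terms of Criterion C.3 are available (for instance, adapted from the previous sketch); it keeps track of the best evaluated midpoint instead of relying on the final N = 1 test, so it follows the spirit of the algorithm rather than its exact step numbering. All names are ours.

from math import ceil, floor

def branch_and_bound(n_max, R, S):
    """n_max[i] is the largest lead time n_i of component i; R(u) and S(u)
    evaluate the two terms of Criterion C.3 for an integer vector u."""
    crit = lambda u: R(u) + S(u)
    mid_up = lambda m, M: [ceil((a + b) / 2) for a, b in zip(m, M)]
    nodes = [([1] * len(n_max), list(n_max))]      # Step 1.2: the whole domain
    best_u = mid_up(*nodes[0])
    U = crit(best_u)                               # Step 1.3: initial upper bound
    while nodes:
        children = []
        for m, M in nodes:                         # Step 2.2: split each active node in two
            halves = [(m, [floor((a + b) / 2) for a, b in zip(m, M)]),
                      (mid_up(m, M), M)]
            for cm, cM in halves:
                if R(cm) + S(cM) >= U:             # remark 1: lower bound of the child
                    continue                       # Step 2.2.5: prune the child
                candidate = mid_up(cm, cM)         # Step 2.2.4: try to improve U
                if crit(candidate) < U:
                    U, best_u = crit(candidate), candidate
                if cm != cM:                       # a one-point node needs no further split
                    children.append((cm, cM))
        nodes = children
    return best_u, U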

C.3.1.3 Numerical Experiments

Consider the case of 3 components. The maximum lead times of these components are, respectively, equal to 8, 7 and 6 elementary periods. The probabilities associated with the different lead times are presented in Table C.1. The inventory costs per period are 10, 5, and 15 for components 1, 2 and 3, respectively. Furthermore, the backlogging cost is 0.05 per period. We apply the algorithm to this example. The initial set of solutions is { {1, …, 8}, {1, …, 7}, {1, …, 6} }.

The initial upper bound is computed with u_1 = 4, u_2 = 4, u_3 = 3. It is equal to U = 71.7857.

³ ⌊a⌋ is the greatest integer less than or equal to a.

Table C.1 Lead time probabilities for each component

Period         1      2      3      4      5      6      7      8
Component 1    0.2    0.3    0.4    0.02   0.02   0.02   0.02   0.02
Component 2    0.03   0.05   0.1    0.2    0.3    0.3    0.02   –
Component 3    0.4    0.5    0.03   0.03   0.02   0.02   –      –

At the second iteration, we consider 2 × 2 × 2 = 8 subsets. Only 5 subsets have a lower bound smaller than U. They are presented in Table C.2. According to the results presented in column 4, the new upper bound is equal to U = 45.29. Among the 5 subsets reviewed at iteration 2, only 3 of them have a lower bound less than the new upper bound (see Table C.2). These 3 subsets are:

{ {1, …, 4}, {4, …, 7}, {1, …, 3} }, { {5, …, 8}, {4, …, 7}, {1, …, 3} } and { {1, …, 4}, {4, …, 7}, {4, …, 6} }.

As a consequence, we will have to consider 3 × 2 × 2 × 2 = 24 subsets at iteration 3. Only 7 subsets among them will display a lower bound less than U. They are presented in Table C.3. As can be seen in this table, the new upper bound will be U = 35.60. Among the 7 subsets reviewed at iteration 3, only 2 of them have a lower bound less than the new upper bound (see Table C.3).

Table C.2 Second iteration

Subsets at iteration 2                      Lower bound    Solution A selected to update the upper bound    Criterion value for A

{ {1, …, 4 }, {1, …, 3 }, {1, …, 3 } } 57.18 2, 2, 2 96.21

{ {1, …, 4 }, {4, …, 7 }, {1, …, 3 } } 11.10 2, 5, 2 45.29

{ {5, …, 8 }, {4, …, 7 }, {1, …, 3 } } 30.56 6, 5, 2 61.36

{ {1, …, 4 }, {4, …, 7 }, {4, …, 6 } } 40.91 2, 5, 5 82.79

{ {5, …, 8 }, {4, …, 7 }, {4, …, 6 } } 60.1 6, 5, 5 97.82


Table C.3 Third iteration

Subsets at iteration 3                      Lower bound    Solution A selected to update the upper bound    Criterion value for A

{ { 1, …, 2 }, { 4, …, 5 }, { 1, …, 1 } } 43.26 1, 4, 1 64.52

{ { 3, …, 4 }, { 4, …, 5 }, { 1, …, 1 } } 41.72 3, 4, 1 55.37

{ { 3, …, 4 }, { 6, …, 7 }, { 1, …, 1 } } 42.97 3, 6, 1 45.69

{ { 1, …, 2 }, { 4, …, 5 }, { 2, …, 3 } } 39.17 1, 4, 2 67.16

{ { 3, …, 4 }, { 4, …, 5 }, { 2, …, 3 } } 32.87 3, 4, 2 53.67

{ { 1, …, 2 }, { 6, …, 7 }, { 2, …, 3 } } 39.48 1, 6, 2 64.23

{ { 3, …, 4 }, { 6, …, 7 }, { 2, …, 3 } } 29.40 3, 6, 2 35.60

These 2 subsets are:

{ {3, …, 4}, {4, …, 5}, {2, …, 3} } and { {3, …, 4}, {6, …, 7}, {2, …, 3} }

As a consequence, we will have to consider 2 × 2 × 2 × 2 = 16 subsets at iteration 4. Only 1 subset among them will display a lower bound less than or equal to U; this subset is:

{ {3, …, 3}, {6, …, 6}, {2, …, 2} }

Thus, the optimal solution is u1 = 3 , u2 = 6 and u3 = 2 and the optimal value of the criterion is 35.6.

C.3.2 Assignment Problem

C.3.2.1 Problem Statement

Consider n resources and m tasks. Each task should be assigned to at most one resource and each resource should perform at most one task.

Let c_{i,j} be the cost incurred when performing task i using resource j. Furthermore, if there are fewer resources than tasks, then each resource must be assigned to a task and, similarly, if there are fewer tasks than resources, then all the tasks should be executed. We define:

$$x_{i,j} = \begin{cases} 1 & \text{if task } i \text{ is performed with resource } j \\ 0 & \text{otherwise} \end{cases}$$

With this definition, the problem can be written as follows:

$$\text{Minimize } \sum_{i=1}^{m} \left[ \sum_{j=1}^{n} c_{i,j}\, x_{i,j} \right] \qquad \text{(C.4)}$$

subject to:

$$\sum_{i=1}^{m} x_{i,j} \le 1 \quad \text{for } j = 1, \ldots, n \qquad \text{(C.5)}$$

$$\sum_{j=1}^{n} x_{i,j} \le 1 \quad \text{for } i = 1, \ldots, m \qquad \text{(C.6)}$$

$$\sum_{i=1}^{m} \left[ \sum_{j=1}^{n} x_{i,j} \right] = \min(m, n) \qquad \text{(C.7)}$$

Expression C.4 is the total cost to be minimized. Constraints C.5 express that each resource takes care of at most one task. Constraints C.6 are introduced to make sure that a task is assigned to at most one resource. Constraint C.7 guaran- tees that the maximum number of assignments is done. To apply a B&B approach to this problem, we have to:

• Build a B&B tree. One possibility is to choose a variable x_{i*,j*} and to build, at the first iteration, the subset of solutions where the value of x_{i*,j*} is zero and the subset where x_{i*,j*} = 1. Thus, the initial set of solutions is split up into two subsets. At the next iteration, select another variable and do the same, and so on. We can also select 2 variables x_{i*,j*} and x_{i**,j**} and generate 4 subsets at the first iteration:
– subset 1 is characterized by x_{i*,j*} = 0 and x_{i**,j**} = 0;
– subset 2 is characterized by x_{i*,j*} = 0 and x_{i**,j**} = 1;
– subset 3 is characterized by x_{i*,j*} = 1 and x_{i**,j**} = 0;
– subset 4 is characterized by x_{i*,j*} = 1 and x_{i**,j**} = 1.
Do the same for each subset that remains a candidate, whatever the iteration.
• Define an upper bound that can be updated at each iteration. Initially, a solution can be to assign task i_1 to resource j_1, where c_{i_1,j_1} = Min_{i,j} c_{i,j}, then task i_2 to resource j_2, where c_{i_2,j_2} = Min_{i ≠ i_1, j ≠ j_1} c_{i,j}, and so on until either all the tasks are assigned to resources or all the resources to tasks. If the iteration is not the first one and we want to refresh the upper bound, then the same process is applied but the values of the variables that define the subset under consideration are kept.
• Define a lower bound for a given subset. The lower bound can be found by solving the problem as a linear programming (LP) problem with real variables, the values of the variables that define the subset being frozen (a sketch of this LP relaxation is given after this list).
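To illustrate the last point, the following sketch computes such a lower bound with the LP relaxation of (C.4)–(C.7), assuming the scipy library is available; the helper lp_lower_bound, the index convention and the handling of frozen variables through the variable bounds are ours.

import numpy as np
from scipy.optimize import linprog

# Costs of Table C.4 (rows: tasks T1..T3, columns: resources R1..R4)
c = np.array([[6, 2, 4, 5],
              [6, 3, 8, 9],
              [4, 3, 5, 9]], dtype=float)
m, n = c.shape

def lp_lower_bound(frozen):
    """Lower bound of a subset: LP relaxation of (C.4)-(C.7) with the
    variables listed in `frozen` ({(i, j): 0 or 1}) fixed to their value."""
    idx = {(i, j): i * n + j for i in range(m) for j in range(n)}
    A_ub, b_ub = [], []
    for j in range(n):                       # (C.5): each resource used at most once
        row = np.zeros(m * n); row[[idx[i, j] for i in range(m)]] = 1
        A_ub.append(row); b_ub.append(1)
    for i in range(m):                       # (C.6): each task assigned at most once
        row = np.zeros(m * n); row[[idx[i, j] for j in range(n)]] = 1
        A_ub.append(row); b_ub.append(1)
    A_eq, b_eq = [np.ones(m * n)], [min(m, n)]   # (C.7): number of assignments
    bounds = [(frozen.get((i, j), 0), frozen.get((i, j), 1))
              for i in range(m) for j in range(n)]
    res = linprog(c.flatten(), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return res.fun

# First subset of the numerical example: x_{1,1} = 0 and x_{2,2} = 0
print(lp_lower_bound({(0, 0): 0, (1, 1): 0}))   # about 13, as reported in Section C.3.2.2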

C.3.2.2 Numerical Example

Consider an example with 3 tasks, T_1, T_2, T_3, and 4 resources, R_1, R_2, R_3, R_4. The costs are given in Table C.4.

Table C.4 Costs

R1 R2 R3 R4

T1 6 2 4 5

T2 6 3 8 9

T3 4 3 5 9

Iteration 1:
Compute the initial upper bound as explained in the previous subsection, and obtain successively x_{1,2} = 1, x_{3,1} = 1 and x_{2,3} = 1. Thus, the initial upper bound is 2 + 4 + 8 = 14. According to the previous notations, U = 14.

The first subset is defined by x_{1,1} = 0 and x_{2,2} = 0. To find the lower bound, the following linear programming (LP) problem should be solved:

Minimize (2x_{1,2} + 4x_{1,3} + 5x_{1,4} + 6x_{2,1} + 8x_{2,3} + 9x_{2,4} + 4x_{3,1} + 3x_{3,2} + 5x_{3,3} + 9x_{3,4})

subject to:

x_{2,1} + x_{3,1} ≤ 1,   x_{1,2} + x_{3,2} ≤ 1,   x_{1,3} + x_{2,3} + x_{3,3} ≤ 1,   x_{1,4} + x_{2,4} + x_{3,4} ≤ 1

x_{1,2} + x_{1,3} + x_{1,4} ≤ 1,   x_{2,1} + x_{2,3} + x_{2,4} ≤ 1,   x_{3,1} + x_{3,2} + x_{3,3} + x_{3,4} ≤ 1

x_{1,2} + x_{1,3} + x_{1,4} + x_{2,1} + x_{2,3} + x_{2,4} + x_{3,1} + x_{3,2} + x_{3,3} + x_{3,4} = 3

Evidently, the variables should be greater than or equal to 0.

The optimal value of the criterion is 13 for x_{1,3} = x_{2,1} = x_{3,2} = 1, the other variables being equal to 0. If an upper bound starting from the definition of this subset is computed, we still obtain U = 14. Since the lower bound 13 is less than the upper bound, we keep this subset for further consideration.

The second subset is defined by x_{1,1} = 0 and x_{2,2} = 1.

If we refresh the upper bound taking into account the fact that x_{1,1} = 0 and x_{2,2} = 1, and applying the approach described in the previous section, we obtain x_{1,3} = 1, x_{2,2} = 1 and x_{3,1} = 1. The other variables are equal to 0. The value of the criterion for this solution is 11. Thus, the new value of the upper bound is U = 11, which rules the previous subset out of further consideration. To find the lower bound, the following LP has to be solved:

Minimize (2x_{1,2} + 4x_{1,3} + 5x_{1,4} + 6x_{2,1} + 3x_{2,2} + 8x_{2,3} + 9x_{2,4} + 4x_{3,1} + 3x_{3,2} + 5x_{3,3} + 9x_{3,4})

subject to:

x_{2,1} + x_{3,1} ≤ 1,   x_{1,2} + x_{3,2} = 0,   x_{1,3} + x_{2,3} + x_{3,3} ≤ 1,   x_{1,4} + x_{2,4} + x_{3,4} ≤ 1

x_{1,2} + x_{1,3} + x_{1,4} ≤ 1,   x_{2,1} + x_{2,3} + x_{2,4} = 0,   x_{3,1} + x_{3,2} + x_{3,3} + x_{3,4} ≤ 1

x_{1,2} + x_{1,3} + x_{1,4} + x_{2,1} + x_{2,3} + x_{2,4} + x_{3,1} + x_{3,2} + x_{3,3} + x_{3,4} = 2

Again, the variables should be greater than or equal to 0, and x_{2,2}, which defines the subset, is frozen to 1.

The optimal value of the criterion is 11 for x_{1,3} = x_{2,2} = x_{3,1} = 1, the other variables being equal to 0. The lower bound 11 being equal to the current upper bound, we keep this subset for further consideration. Indeed, we will discard this subset only if the upper bound decreases on the occasion of a future refreshment.

The third subset is defined by x_{1,1} = 1 and x_{2,2} = 0.

If we try to refresh the upper bound taking into account the fact that x_{1,1} = 1 and x_{2,2} = 0, and applying the approach described in the previous section, we obtain x_{1,1} = 1, x_{2,3} = 1 and x_{3,2} = 1. The other variables are equal to 0. The value of the criterion for this solution is 17. Thus, the value of the upper bound remains U = 11. To find the lower bound, the following LP should be solved:

Minimize (6x_{1,1} + 2x_{1,2} + 4x_{1,3} + 5x_{1,4} + 6x_{2,1} + 8x_{2,3} + 9x_{2,4} + 4x_{3,1} + 3x_{3,2} + 5x_{3,3} + 9x_{3,4})

subject to:

x_{2,1} + x_{3,1} = 0,   x_{1,2} + x_{3,2} ≤ 1,   x_{1,3} + x_{2,3} + x_{3,3} ≤ 1,   x_{1,4} + x_{2,4} + x_{3,4} ≤ 1

x_{1,2} + x_{1,3} + x_{1,4} = 0,   x_{2,1} + x_{2,3} + x_{2,4} ≤ 1,   x_{3,1} + x_{3,2} + x_{3,3} + x_{3,4} ≤ 1

x_{1,2} + x_{1,3} + x_{1,4} + x_{2,1} + x_{2,3} + x_{2,4} + x_{3,1} + x_{3,2} + x_{3,3} + x_{3,4} = 2

Once more, the variables should be greater than or equal to 0, and x_{1,1}, which defines the subset, is frozen to 1.

The optimal value of the criterion is 17 for x_{1,1} = x_{2,3} = x_{3,2} = 1, the other variables being equal to 0. Thus, the third subset will be discarded from future consideration.

The last subset is defined by x_{1,1} = 1 and x_{2,2} = 1. The optimal solution of the LP problem with real variables is straightforward: x_{1,1} = x_{2,2} = x_{3,3} = 1, the other variables being equal to 0. The corresponding value of the criterion is 14 > U. Thus the last subset does not contain the optimal solution.

Iteration 2:
At this point, we know that the second subset is the only one that deserves further consideration. Furthermore, the lower bound related to this subset is equal to the upper bound that is valid at the end of the first iteration. As a consequence, this solution is optimal. For the problem under consideration, the optimal value of the criterion is 11 and the optimal solution is x_{1,3} = x_{2,2} = x_{3,1} = 1, the other variables being equal to 0.

C.3.3 Traveling Salesman Problem

C.3.3.1 Stating the Problem

A salesman has to visit n shops located in different towns denoted by 1, 2, …, n. The objective is to find the shortest circuit passing once through each town. There are (n − 1)! such circuits. We assume that the salesman's office is located in city 1.

We define a variable x_{i,j} as follows:

$$x_{i,j} = \begin{cases} 1 & \text{if town } j \text{ follows town } i \text{ in the circuit} \\ 0 & \text{otherwise} \end{cases}$$

We also denote by c_{i,j} the distance between towns i and j. Finally, the traveling salesman problem can be expressed as follows:

$$\text{Minimize } \sum_{i=1}^{n} \left[ \sum_{j=1,\, j \ne i}^{n} c_{i,j}\, x_{i,j} \right] \qquad \text{(C.8)}$$

subject to:

$$\sum_{i=1,\, i \ne j}^{n} x_{i,j} = 1 \quad \text{for } j = 1, 2, \ldots, n \qquad \text{(C.9)}$$

$$\sum_{j=1,\, j \ne i}^{n} x_{i,j} = 1 \quad \text{for } i = 1, 2, \ldots, n \qquad \text{(C.10)}$$

$$x_{i,j} \in \{0, 1\} \quad \text{for } i = 1, 2, \ldots, n \text{ and } j = 1, 2, \ldots, n \qquad \text{(C.11)}$$

Expression C.8 shows that the objective is to minimize the length of the circuit. Constraints C.9 express that only one town precedes each town in the circuit. Constraints C.10 state that only one town succeeds each town in the circuit. Finally, Constraints C.11 show that the problem is a binary linear programming problem.

C.3.3.2 Applying a B&B Approach

B&B Tree
We are dealing with a circuit. Thus, we can start with any one of the towns. Let town 1 be the root of the tree. This town can have n − 1 possible successors: 2, 3, …, n. Thus, we will have n − 1 nodes at the second level of the tree. Each node of the second level will give birth to n − 2 nodes at the third level, and so on. Indeed, some nodes are discarded at some levels of the B&B tree due to the relative values of the global upper bound and the local lower bounds of the criterion (this is the case for a minimization problem). In the case of a maximization problem, the previous item holds after inverting "upper" and "lower".

Upper Bound
An initial upper bound can be obtained in several ways:
1. Construct a circuit step-by-step, starting with town 1 and selecting as the successor of a town the closest town among those not yet integrated in the circuit. The length of such a circuit is an upper bound.
2. We can compute a "good" circuit by using a heuristic (a simulated annealing approach, for instance). The length of this circuit is an upper bound. Note that using a simulated annealing approach usually leads to a near-optimal solution.
3. Generate at random several circuits and keep the best (i.e., shortest) one. The length of this circuit is also an upper bound.
These approaches can be applied to refresh the upper bound at different levels of the tree; a sketch of the first approach is given below.
Another strategy to refresh the upper bound is the so-called in-depth strategy. It consists of partitioning one of the subsets obtained, then partitioning one of the last subsets obtained, and so on, until we reach a subset containing only one element: at this point, we have a circuit the length of which is an upper bound. Different ways exist to select the node (i.e., the subset) from which an in-depth strategy is developed: lowest lower bound at the level under consideration, first subset generated at this level, etc.
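As an illustration of the first approach, here is a small Python sketch (names ours) that builds a nearest-neighbour circuit starting from town 1 and returns its length, which can serve as an initial upper bound; the distance matrix used in the call is purely hypothetical.

def nearest_neighbour_upper_bound(dist):
    """Build a circuit from town 1 (index 0) by always moving to the closest
    unvisited town; dist[i][j] is the distance between towns i and j."""
    n = len(dist)
    circuit, visited = [0], {0}
    while len(circuit) < n:
        last = circuit[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: dist[last][j])
        circuit.append(nxt)
        visited.add(nxt)
    length = sum(dist[circuit[k]][circuit[k + 1]] for k in range(n - 1))
    return length + dist[circuit[-1]][0], circuit   # close the circuit back to town 1

# Hypothetical distance matrix, only to illustrate the call:
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
print(nearest_neighbour_upper_bound(d))   # (23, [0, 1, 3, 2])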

Computing Lower Bounds We obtain a lower bound of the criterion for the solutions belonging to a subset by: • freezing the values of the variables that define the subset; • solving the linear programming problem after relaxing the binary constraints that apply to the variables that do not define the subset.

C.4 Conclusion

The goal of this appendix was to introduce the basis of the B&B approach. Three important aspects have been highlighted (we have restricted ourselves to the minimization problem):
• The design of the B&B tree.
• The computation of an upper bound that can be refreshed based on the information contained in the current subsets. Usually, an upper bound is computed either by generating one or more solutions at random and keeping the best one, or by applying a heuristic algorithm.
• The computation of a lower bound in each subset. Usually, a lower bound is obtained by computing the optimal solution of the problem on the subset under consideration but after relaxing some constraints.
Also mentioned was the in-depth strategy that consists in developing a branch of the tree until a leaf is reached. This provides an upper bound of the optimal solution. Several ways have been indicated to select the node from which the in-depth strategy is developed.
Note that the examples proposed in this appendix are only a small subset of the applications available in the literature. Other possible examples are:
• the knapsack problem;
• non-linear programming problems;
• the quadratic assignment problem;
• line-balancing problems;
• lot-sizing problems.
The main drawback of the B&B method is that it is impossible to predict the computational burden of reaching the optimal solution. If the bounds are far from the optimal criterion value, then we may be forced to enumerate most or all of the solutions to find the optimal one.

C.5 Recommended Reading

Agin N (1966) Optimum seeking with Branch-and-Bound. Manag. Sci. 13(4):176–185 Baker KR (1974) Introduction to Sequencing and Scheduling. John Wiley & Sons, New York, NY Balas E, Ceria S, Cornuéjols G (1996) Mixed 0-1 programming by lift-and-project in a branch- and-cut framework. Manag. Sci. 42(9):1229–1246 Barnhart C, Johnson EL, Nemhauser GL, Savelsbergh MWP, Vance PH (1998) : for solving huge integer programs. Oper. Res. 46:316–329 Bazaraa MS, Shetty CM (1979) . Theory and Algorithms. John Wiley & Sons, New York, NY Brucker P, Hurink J, Jurisch B, Wöstmann B (1997) A algorithm for the open- shop problem. Discr. Appl. Math. 76:43–59 Clausen J, Trïa JL (1991) Implementation of parallel branch-and-bound algorithms – experiences with the graph partitioning problem. Ann. Oper. Res. 33(5):331–349 Climaco J, Ferreira C, Captivo ME (1997) Multicriteria : an overview of different algorithmic approaches. In: Climaco J (ed) Multicriteria Analysis. Springer, Berlin, pp. 248 – 258 Cordier C, Marchand H, Laundy R, Wolsey LA (1999) bc-opt: A branch-and-cut code for mixed integer programs. Math. Prog. 86(2):335–353 Dolgui A, Eremeev AV, Sigaev VS (2007) HBBA: Hybrid algorithm for buffer allocation in tan- dem production lines. J. Intell. Manuf. 18(3):411–420 Dolgui A, Ihnatsenka I (2009) Branch and bound algorithm for a transfer line design problem: stations with sequentially activated multi-spindle heads. Eur. J. Oper. Res. 197(3):1119–1132 Dowsland KA, Dowsland WB (1992) Packing problems. Eur. J. Oper. Res. 56(1):2–14 Dyckhoff H (1990) A typology of cutting and packing problems. Eur. J. Oper. Res. 44(2):145– 159 Gendron B, Crainic TG (1994) Parallel branch and bound algorithms: survey and synthesis. Op- er. Res. 42:1042–1066 Hendy MD, Penny D (1982) Branch and bound algorithms to determine minimal evolutionary trees. Math. Biosci. 60:133–142 Horowitz E, Sahni S (1984) Fundamentals of Computer Algorithms. Computer Science Press, New York, NY Kumar V, Rao VN (1987) Parallel depth-first search, part II: Analysis. Int. J. Parall. Prog. 16:501–519 Louly MA, Dolgui A, Hnaien F (2008) Optimal supply planning in MRP environments for as- sembly systems with random component procurement times. Int. J. Prod. Res. 46(19):5441– 5467 Martello S, Toth P (1990) Knapsack Problems: Algorithms and Computer Implementations. John Wiley & Sons, New York, NY Mitten LG (1970) Branch-and-bound methods: general formulation and properties. Oper. Res. 18(1):24–34 Mordecai A (2003) Nonlinear Programming: Analysis and Methods. Dover Publishing, Mineola, NY Nemhauser GL, Wolsey LA (1988) Integer and Combinatorial Optimization. John Wiley & Sons, New York, NY Proth J-M, Hillion HP (1990) Mathematical Tools in Production Management, Plenum Press, New York, NY Rao VN, Kumar V (1987) Parallel depth-first search, part I: implementation. Int. J. Parall. Prog. 16:479–499 Senju S, Toyoda Y (1968) An approach to linear programming with 0-1 variables. Manag. Sci. 15:196–207 C.5 Recommended Reading 501

Sprecher A (1999) A competitive branch-and-bound algorithm for the simple assembly line bal- ancing problem. Int. J. Prod. Res. 37:1787–1816 Sweeney PE, Paternoster ER (1992) Cutting and packing problems: a categorized, application- orientated research bibliography. J. Oper. Res. Soc. 43(7):691–706

Appendix D Tabu Search Method

D.1 Introduction

Contrary to simulated annealing (see Appendix A), tabu search is a method with memory: once a solution has been defined, it is marked as a "tabu solution", which prevents the algorithm from visiting this solution again for a given number of iterations.
In the simulated annealing method, a solution is selected at random in the neighborhood¹ of the current one. One keeps this solution if it is better than the current one; otherwise one keeps it with a probability that decreases with the number of iterations already performed and with the difference between the criterion value of the selected solution and that of the current one.
The tabu search (TS) method is different; three rules apply:
1. The algorithm keeps track of the last N solutions that have been obtained (they constitute the tabu list), and these solutions cannot be revisited: they are tabu. A variation of the tabu list prohibits solutions that have some attributes called "tabu active attributes".
2. The algorithm selects the best solution in the neighborhood of the current solution. It may select a solution worse than the current one when no better solution exists. In this case, the criterion value of the selected solution must be less than the value of the aspiration function.² In some situations, this rule may lead to the selection of a tabu solution.
3. When the number of solutions in the neighborhood is too large, the search is limited to a subset of the neighborhood. The design of this subset depends on the type of problem under consideration.

¹ The neighborhood of a solution S is the set of solutions obtained by disturbing S "slightly". The disturbance depends on the problem under consideration.
² An aspiration function could be, for instance, the product of the best value of the criterion obtained so far by a number greater than 1 (case of a minimization problem).

The initial solution required to start the tabu search is usually computed using a heuristic. The quality of this initial solution is unimportant. Furthermore, several rules to stop the algorithm are available: • Stop the algorithm when the criterion does not improve for k consecutive solu- tions. The value of the parameter k is provided by the user. • Stop the algorithm when the value of the current criterion is “close” to a known lower bound. • Stop the algorithm when a given number of iterations is reached. This number is provided by the user.

D.2 Tabu Search Algorithm

In this section, we present the general tabu search algorithm. Keep in mind that the neighborhood depends on the type of problem under consideration. Also, the length N of the tabu list, the aspiration function and the criterion to stop the algorithm should be defined by the user. We introduce two notions to facilitate the implementation of the tabu algorithm:

1. A realizable (or feasible) solution is a solution that satisfies the constraints of the problem. E_0 denotes the set of feasible solutions.
2. An admissible solution is a solution that satisfies a given subset of the constraints of the problem. The set of admissible solutions is denoted by E_1. Indeed, E_1 ⊃ E_0.

f(s) denotes the value of the criterion when s ∈ E_0. When s ∈ E_1 \ E_0, the criterion becomes f(s) + g(s), where g(s) > 0 if the goal is to minimize the criterion and g(s) < 0 otherwise. The function g is chosen by the user. The objective is to prompt the algorithm to leave the subset E_1 \ E_0 and to preferably explore solutions belonging to E_0. Taking into account the previous information, the tabu algorithm can be summarized as follows.

Algorithm D.1. (Tabu) Starting the computation:

1. Generate an admissible solution s_0 ∈ E_1.
2. Set m = 0. The variable m will contain the rank of the current iteration.
3. Introduce the value of k defined in Section D.1 (first rule to stop the algorithm).
4. Introduce the value of N (length of the tabu list).
5. Set T = ∅. T will contain the set of tabu solutions.
6. Introduce the aspiration function A. This function can be a real value greater than 1 that multiplies the value of the criterion under consideration.
7. Set s* = s_0.
8. If s_0 ∈ E_0, set f* = f(s_0), otherwise set f* = f(s_0) + g(s_0).

³ To start, it is advised to choose a multiplicative variable greater than 1.

Iterations:

In the following, F(s) refers to f(s) if s ∈ E_0 and to f(s) + g(s) if s ∈ E_1 \ E_0.

9. While m < k do:
9.1. Set m = m + 1.
9.2. Generate a neighborhood V(s_0) of s_0.
9.3. Select in V(s_0) the best solution s_1 that is not tabu ( s_1 ∉ T ) and such that F(s_1) < A[F(s_0)]. If no such s_1 exists, then stop the computation.
9.4. If F(s_1) < F(s*), set s* = s_1, f* = F(s_1) and m = 0.
9.5. If the number of solutions in T (tabu solutions) is equal to N, then remove the oldest solution from T.
9.6. Set T = T ∪ {s_1}.
9.7. Set s_0 = s_1.

10. Display s* and f*.
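As an illustration, here is a compact Python sketch of Algorithm D.1 for a minimization problem; the functions neighbourhood and F, the parameters k and N and the aspiration coefficient a are placeholders to be supplied by the user, and the aspiration test uses the multiplicative form suggested above. All names are ours.

def tabu_search(s0, neighbourhood, F, k=50, N=20, a=1.2):
    """Sketch of Algorithm D.1: neighbourhood(s) returns candidate solutions,
    F(s) their (possibly penalized) criterion value."""
    s_best, f_best = s0, F(s0)
    s_cur, tabu, m = s0, [], 0
    while m < k:
        m += 1
        # Candidates: not tabu and below the aspiration level a * F(s_cur)
        candidates = [s for s in neighbourhood(s_cur)
                      if s not in tabu and F(s) < a * F(s_cur)]
        if not candidates:
            break
        s_next = min(candidates, key=F)        # best neighbour, even if worse than s_cur
        if F(s_next) < f_best:
            s_best, f_best, m = s_next, F(s_next), 0
        tabu.append(s_next)
        if len(tabu) > N:
            tabu.pop(0)                        # drop the oldest tabu solution
        s_cur = s_next
    return s_best, f_best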

D.3 Examples

Note that, in this book, tabu search has already been applied to a line-balancing problem (see Chapter 7). The following four examples provide an additional in- sight into the use of tabu search.

D.3.1 Traveling Salesman Problem

This problem has been already presented in the previous appendices. A salesman has to visit shops located in n different towns. The objective is to find the shortest circuit passing once through each town. This circuit starts from the salesman's office and ends in the same office, which is located in the (n + 1)-th town.

Let d_{i,j} denote the distance from town i to town j or vice versa. To solve this problem using the tabu method, we use the following definitions:

1. The neighborhood of a given solution (circuit) s_0 is the set of solutions obtained by permuting 2 towns of the circuit. The number of solutions in V(s_0) is n(n − 1)/2, thus we can keep the complete neighborhood to find the next "best" solution.
2. Each element of the tabu list is made of the pair of towns that have been permuted together with their ranks before permutation. Thus each element of the tabu list consists of four integers (assuming that a town is represented by an integer).
3. The aspiration function is the product of the criterion value by a number greater than 1.
In this example the set of feasible solutions is the same as the set of admissible solutions.

D.3.2 Scheduling in a Flow-shop

D.3.2.1 Problem Studied

We manufacture K products denoted by P_1, P_2, …, P_K. Each product has to visit n machines M_1, M_2, …, M_n in this order to be completed. The time required to perform the operation of P_i on M_j is denoted by t_{i,j}. The order in which the products are launched into production is also the order in which they visit the machines. The objective is to find the release sequence that minimizes the makespan, that is to say the difference between the time the last product leaves M_n and the time the first product enters machine M_1.

D.3.2.2 Computation of the Makespan when a Release Sequence is Given

Let i(k) be the index of the product of rank k.
The computation is based on the following remark: for a product P_{i(k)} to enter a machine M_j, two conditions are necessary: (i) it should have left the machine M_{j−1} (if j > 1) and (ii) the product that precedes P_{i(k)}, that is to say P_{i(k−1)} (if any), should have left M_j.

As a consequence:

• The time when P_{i(k)} leaves M_1 is:

$$\Theta_{i(k),1} = \sum_{s=1}^{k} t_{i(s),1} \qquad \text{(D.1)}$$

• The time when P_{i(1)} leaves M_j is:

$$\Theta_{i(1),j} = \sum_{r=1}^{j} t_{i(1),r} \qquad \text{(D.2)}$$

• When k > 1 and j > 1, the time when P_{i(k)} leaves M_j is:

$$\Theta_{i(k),j} = \max\left( \Theta_{i(k-1),j},\; \Theta_{i(k),j-1} \right) + t_{i(k),j} \qquad \text{(D.3)}$$

Finally, the makespan is Θ_{i(K),n}.

Example Consider a small example involving 6 products and 4 machines. The operation times are given in Table D.1.

Table D.1 Operation times

      M1   M2   M3   M4
P1     8    2    4    7
P2     3    8    6    3
P3    10   11   10    9
P4     2   15    9   12
P5     7   11    8   14
P6    11    8   12   11

Table D.2 Times products leave the machines

      M1   M2   M3   M4
P1     8   10   14   21
P2    11   19   25   28
P3    21   32   42   51
P4    23   47   56   68
P5    30   58   66   82
P6    41   66   78   93

The times products leave the machines, under the assumption that the order in which products are launched into production is P1 → P2 → P3 → P4 → P5 → P6, are given in Table D.2. The first column is obtained by applying Relation D.1, the first row results from Relation D.2 and the other elements of the table are derived from Relation D.3.
In other words, the elements of the first column (row) of Table D.2 are obtained by adding the elements of the first column (row) of Table D.1 up to the position of the element. The first row and the first column being available, the element of row i and column j in Table D.2 is obtained by adding the element of the same position in Table D.1 to the maximum between the element of row i − 1 and column j and the element of row i and column j − 1 in Table D.2. Indeed, the rows of Table D.1 must be organized in the order in which the products are launched into production.

The makespan for this order is 93 (the element of the last row and column in Table D.2).

We then restart the computation for the order:

P1 → P3 → P2 → P5 → P4 → P6

We first reorder the rows of Table D.1 according to the release sequence (see Table D.3) and derive Table D.4 from Table D.3 in the same way that Table D.2 has been derived from Table D.1.

With this new order, the makespan is 95.

This example shows that the makespan depends on the order products are set into production. The objective is to find the optimal order, that is to say the order that minimizes the makespan.

Table D.3 Operation times

      M1   M2   M3   M4
P1     8    2    4    7
P3    10   11   10    9
P2     3    8    6    3
P5     7   11    8   14
P4     2   15    9   12
P6    11    8   12   11

Table D.4 Times products leave the machines

      M1   M2   M3   M4
P1     8   10   14   21
P3    18   29   39   48
P2    21   37   45   51
P5    28   48   56   70
P4    30   63   72   84
P6    41   71   84   95
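The computation of Relations D.1–D.3 is easy to automate; the short Python sketch below (names are ours) rebuilds Tables D.2 and D.4 row by row and returns the makespan, reproducing the values 93 and 95 obtained above.

def makespan(times, order):
    """Completion times of Relations D.1-D.3 for a flow-shop.
    times[p][j] is the operation time of product p on machine j,
    order is the release sequence (indices into times)."""
    K, n = len(order), len(times[0])
    theta = [[0] * n for _ in range(K)]
    for k, p in enumerate(order):
        for j in range(n):
            prev_machine = theta[k][j - 1] if j > 0 else 0    # (i) left M_{j-1}
            prev_product = theta[k - 1][j] if k > 0 else 0    # (ii) predecessor left M_j
            theta[k][j] = max(prev_machine, prev_product) + times[p][j]
    return theta[-1][-1]

# Operation times of Table D.1 (products P1..P6, machines M1..M4)
t = [[8, 2, 4, 7], [3, 8, 6, 3], [10, 11, 10, 9],
     [2, 15, 9, 12], [7, 11, 8, 14], [11, 8, 12, 11]]
print(makespan(t, [0, 1, 2, 3, 4, 5]))   # 93, as in Table D.2
print(makespan(t, [0, 2, 1, 4, 3, 5]))   # 95, as in Table D.4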

In the example presented above, the number of solutions is 6! = 720: it is possible to explore all the solutions to find the best one. In practice, the number of products to schedule in real-life problems may be greater than 50, which means that 50! solutions exist, and this number is greater than 3 × 10^64: this explains why heuristic algorithms are of utmost importance to solve this problem.

D.3.2.3 Tabu Approach

The neighborhood of a given solution can be defined in several ways. However, we have to keep in mind that the makespan must be computed for each one of the solutions belonging to the neighborhood. As a consequence, the neighborhood should have a reasonable size. For instance, we can decide that the neighborhood is the set of solutions obtained by permuting two products in the current solution. In this case, the size of the neighborhood is K(K − 1)/2. Another possibility is to define the neighborhood as the set of solutions obtained by permuting two consecutive elements of the current solution. In this case, the size of the neighborhood is only K − 1. Indeed, the more constrained the design of the neighborhood, the greater the risk of missing the optimal solution. The length N of the tabu list depends on K (the greater K, the greater N) but there is no rule to derive N from K. Furthermore, an element of the tabu list is made up of the indices and positions of the products that are permuted. Remember also that if N elements are stored in the tabu list, we remove the oldest element of the list before introducing a new one.

Concerning the aspiration function, we suggest replacing F(s_1) < A[F(s_0)] in Step 9.3 of the algorithm by F(s_1) < a × F(s_0), where a is a real number greater than 1. Furthermore, since E_1 \ E_0 is empty, F can be replaced by f everywhere in the algorithm.

D.3.3 Graph Coloring Problem

D.3.3.1 Problem Statement

Consider a graph G = (V, E), where V is the set of vertices (or nodes) and E the set of edges that connect pairs of vertices. The objective is to find a coloring of the vertices with as few colors as possible; the constraint is that two connected vertices (that is to say, vertices linked by an edge) should not receive the same color.

In other words, we are looking for a partition of V into K subsets V_1, V_2, …, V_K that minimizes K and such that:

$$\sum_{k=1}^{K} Q(V_k) = 0$$

where Q(V_k) is the number of edges having both endpoints in V_k.

D.3.3.2 Application of Tabu Search

Upper Bound on the Number of Colors
An obvious upper bound on the number of colors is the number n of vertices. Another solution to obtain an upper bound could be to apply a heuristic algorithm.

Ingredients of Tabu Search
Assume that the number K of colors is known. Indeed, K < n.
Each element of the neighborhood S(P) of a partition P = {V_1, V_2, …, V_K} is obtained as follows: select an edge having both endpoints in the same subset V_k, then select one of the endpoints, say v, at random and assign it to another subset V_s, s ≠ k, also selected at random. The number of elements of S(P) is:

$$Q(P) = \sum_{k=1}^{K} Q(V_k)$$

since each edge having both endpoints in the same subset generates one element of S(P).
If the vertex v is removed from V_k and assigned to V_s, then the pair (v, k) is assigned to the tabu list. Indeed, the oldest pair of the tabu list is removed first if the list T is full (i.e., if the number of elements of T is equal to N). The size N of the tabu list is provided by the user, applying a trial-and-error approach. The aspiration function is A(P) = a Q(P), where a > 1.
In this example the set E_0 of feasible solutions is not the same as E_1, the set of admissible solutions, since, at Step 9 of the algorithm presented hereafter, the initial partition is obtained by assigning the vertices of the graph at random to K different subsets.
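The following Python fragment sketches the two ingredients above for a graph given as an edge list: the conflict count Q(P) and one random neighborhood move, which also returns the pair (v, k) to be declared tabu. Names are ours; the partition is represented as a list of sets and the move assumes Q(P) > 0.

import random

def Q(partition, edges):
    """Number of conflicting edges: both endpoints in the same subset."""
    colour = {v: k for k, subset in enumerate(partition) for v in subset}
    return sum(1 for u, v in edges if colour[u] == colour[v])

def neighbour(partition, edges):
    """One move of S(P): pick a conflicting edge, move one of its endpoints
    to another subset chosen at random (assumes Q(partition) > 0)."""
    colour = {v: k for k, subset in enumerate(partition) for v in subset}
    u, v = random.choice([e for e in edges if colour[e[0]] == colour[e[1]]])
    vertex = random.choice((u, v))
    k = colour[vertex]
    s = random.choice([i for i in range(len(partition)) if i != k])
    new_partition = [set(subset) for subset in partition]
    new_partition[k].remove(vertex)
    new_partition[s].add(vertex)
    return new_partition, (vertex, k)   # (vertex, old subset) goes to the tabu list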

Graph-coloring Algorithm This algorithm is denoted by COL-T.

Algorithm D.2. (COL-T)

1. Introduce the data that define the graph.
2. Introduce the value of MI. This value is the maximum number of iterations allowed to reach a solution when the number of colors is given.
3. Introduce N, the length of the tabu list (or the number of elements in the tabu list).
4. Set K* = m. If we assign initially different colors to the vertices, then m = n, or m is the number of colors in the solution provided by a heuristic.
5. Set K = m − 1.
6. Introduce a > 1 that reflects the aspiration function A(P) = a Q(P).
7. Set T = ∅ and t = 0. T will contain the tabu elements and t the number of elements in T.
8. Set mi = 0, where mi is the counter of the number of iterations for each value of K.
9. Assign at random the vertices of the graph to K subsets. We denote this partition by P = {V_1, V_2, …, V_K}.
10. While mi ≤ MI do:
10.1. If t < N, then do t = t + 1.
10.2. Set mi = mi + 1.
10.3. Generate the neighborhood S(P) of P as explained above and compute, for each p ∈ S(P), Q(p) = Σ_{k=1}^{K} Q_p(V_k), where Q_p(V_k) is the number of edges of p ∈ S(P) having both endpoints in V_k.
10.4. Set H_P = { p ∈ S(P) | Q(p) < a Q(P) and the move (v, k) that generates p does not belong to T } and select p* ∈ H_P such that Q(p*) = Min_{p ∈ H_P} Q(p).
10.5. If Q(p*) = 0, then:
10.5.1. Set K* = K and P* = p*.
10.5.2. Set K = K − 1.
10.5.3. Go to 7.
10.6. If Q(p*) > 0, then:
10.6.1. If t = N, remove the oldest element from T.
10.6.2. If p* is obtained by removing vertex v from V_k and assigning it to V_s, then the pair (v, k) is assigned to the tabu list.
10.6.3. Set P = p*.
11. Display K* and P*.

Figure D.1 An example of a graph with 11 vertices (its adjacency is given in Table D.5)

Table D.5 Another definition of the graph from Figure D.1

Vertex    Adjacent vertices
1         2, 3
2         1, 4, 10
3         1, 5
4         2, 6, 9
5         3, 6, 7
6         4, 5, 7, 8, 9
7         5, 6, 8
8         6, 7, 11
9         4, 6, 10, 11
10        2, 9, 11
11        8, 9, 10

Example
Consider the graph of Figure D.1 (see also Table D.5). It includes 11 vertices denoted by 1, 2, …, 11. The algorithm COL-T was applied to this graph with MI = 30 (number of iterations), N = 8 (length of the tabu list) and a = 3 (aspiration constant). The result shows that K* = 3, which means that only three colors are enough to color the graph so that any two adjacent vertices are always differently colored. The colors assigned to the vertices are denoted by 1, 2, 3. The assignments are given in Table D.6.

Table D.6 Assignment of colors

Vertex 1 2 3 4 5 6 7 8 9 10 11

Color 2 3 3 1 2 3 1 2 2 1 3

D.3.4 Job-shop Problem

D.3.4.1 Problem Considered

We consider a set of N machines denoted by M_1, M_2, …, M_N. They are used to produce K products P_1, P_2, …, P_K. Each product has to visit a specific sequence of machines. Such a sequence is called a manufacturing process. We denote by t_{i,j} the operation time of product P_i on machine M_j. To illustrate the problem, we consider a system where N = K = 3. The manufacturing processes are given hereafter. The operation times are put in brackets.

P_1: M_3 (4), M_2 (2), M_1 (5)

P_2: M_2 (3), M_3 (6), M_1 (4)

P_3: M_3 (4), M_2 (8)

The objective is to find the order in which products visit the machines. Since 2 products visit machine M_1, two orders are possible. Similarly, 3! = 6 orders are possible in front of machines M_2 and M_3. The objective is to find the set of orders that minimizes the makespan.

D.3.4.2 Graph Model

We propose a graph model of the problem in Figure D.2.

Figure D.2 The graph model. I and O are the input and output vertices; each product is represented by a chain of conjunctive arcs through its operations, with operation times attached (P_1: M_3(4), M_2(2), M_1(5); P_2: M_2(3), M_3(6), M_1(4); P_3: M_3(4), M_2(8)), and dotted disjunctive edges link operations that share a machine.

I is the input vertex and O the output vertex. The arcs (continuous lines) represent the manufacturing processes. These arcs are called conjunctive arcs and represent the order in which operations must be performed on each product. Moreover, changing the direction of a conjunctive arc is not allowed. The dotted lines are disjunctive edges. Disjunctive edges must be transformed into arcs by fixing the order in which products should visit the machines. Thus, an admissible solution is obtained by transforming each disjunctive edge into an arc. It has been proven that a solution is feasible if the directed graph obtained after transforming edges into arcs does not contain a circuit. Thus, an algorithm that detects a circuit in such a graph is necessary to apply the tabu search approach.

D.3.4.3 Detecting a Circuit in a Directed Graph

The Cycle algorithm presented hereafter detects whether or not a directed graph contains a circuit and, in the former case, reports it. This algorithm is based on the fact that if a directed graph does not contain a circuit, then at least one of its vertices is without a predecessor.

Algorithm D.3. (Cycle)
1. Set Q = ∅.
2. Assign to G the set of vertices.
3. While G ≠ ∅ do:
3.1. If G contains a node a without a predecessor, then do:
3.1.1. Q = Q ∪ {a}.
3.1.2. G = G \ {a}.
3.1.3. Remove a from the graph as well as the arcs the origin of which is a.
3.2. If all the vertices of G have at least one predecessor, then:
3.2.1. The graph contains at least one circuit.
3.2.2. End of the algorithm.
4. The graph does not contain a circuit.
5. End of the algorithm.
As an example, consider the model of Figure D.2 and mark the nodes by 2 integers. The first one is the index of the product. The second is the index of the machine. Furthermore, the disjunctive edges have been transformed into arcs. The resulting directed graph is given in Figure D.3.

Figure D.3 The directed graph (solution). Each node is labeled (i, j) and stands for the operation of product P_i on machine M_j; the conjunctive chains are I → (1,3) → (1,2) → (1,1) → O, I → (2,2) → (2,3) → (2,1) → O and I → (3,3) → (3,2) → O, and the disjunctive edges have been oriented.
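To make Algorithm D.3 concrete, here is a small Python sketch (function and variable names are ours); it removes vertices without predecessors until either the graph is empty (no circuit) or every remaining vertex has a predecessor (a circuit exists). The vertices and arcs used in the calls are toy data, not the graph of Figure D.3.

def contains_circuit(vertices, arcs):
    """Sketch of Algorithm D.3 (Cycle): repeatedly remove a vertex without
    predecessor; the graph contains a circuit iff some vertices remain."""
    preds = {v: set() for v in vertices}
    for a, b in arcs:                    # arc from a to b
        preds[b].add(a)
    remaining = set(vertices)
    while remaining:
        free = [v for v in remaining if not preds[v] & remaining]
        if not free:                     # every remaining vertex has a predecessor
            return True                  # -> at least one circuit
        for v in free:
            remaining.discard(v)
    return False                         # all vertices removed: no circuit

# Toy usage (hypothetical vertices and arcs):
print(contains_circuit(["I", "a", "b", "O"], [("I", "a"), ("a", "b"), ("b", "O")]))  # False
print(contains_circuit(["a", "b", "c"], [("a", "b"), ("b", "c"), ("c", "a")]))       # True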

The changes in the directed graph when applying Algorithm D.3 are shown in Figures D.4 – D.11. The last state of the graph consists merely of the output vertex. Thus, the directed graph presented in Figure D.3 shows a feasible solution. It is easy to verify that if we reverse the direction of arc [(3,2), (2,2)], the solution represented by the graph is no longer feasible.

Figures D.4–D.11 First to eighth steps of the algorithm. The vertices I, (1,3), (3,3), (3,2), (2,2), (2,3), (1,2) and (1,1) are removed one after the other; after the eighth step only the vertices (2,1) and O remain.

The makespan associated with a feasible solution is found by applying the dy- namic programming approach to the graph representing this solution. Weights associated with vertices I and O are equal to 0. The weights associated with the other nodes are operation times. In the solution (see Figure D.3), the makespan is equal to 30.

D.3.4.4 Application of the Tabu Approach

We first have to define a feasible solution. This is quite easy: each time a machine becomes idle, one of the products waiting in front of it (if any) is introduced to the machine. Thus, a total schedule can be obtained by simulation and translated into disjunctive arcs to represent a feasible solution.

The neighborhood of a solution is obtained by changing the directions of the disjunctive arcs. Thus, the number of elements in the neighborhood is equal to the number of disjunctive arcs. Indeed, a solution belonging to the neighborhood is either feasible (i.e., belongs to E_0) or simply admissible (i.e., belongs to E_1 \ E_0). The status of a solution is determined by applying Algorithm D.3 (Cycle). If a solution s belongs to E_0, then the criterion is the makespan f(s). If s ∈ E_1 \ E_0, then the notion of makespan does not make sense. In this case, we can give the criterion twice the value of the best criterion obtained so far. The tabu list contains the endpoints of the N last arcs whose directions have been changed.

D.4 Drawbacks

Some difficulties may arise when tabu search is used: • Defining the length of the tabu list is a tradeoff between the efficiency of the algorithm and the computation burden. This endeavor is often not easy. • Selecting a subset of the neighborhood when the number of elements is too large is difficult in some cases. • The same remark holds when defining the aspiration function.

D.5 Recommended Reading

Aboudi R, Jörnsten K (1992) Tabu search for general zero-one integer programs using the pivot and complement heuristic. ORSA J. Comput. 6(1):82–93
Battiti R, Tecchiolli G (1994) The reactive tabu search. ORSA J. Comput. 6(2):126–140
Costa D (1994) A tabu search algorithm for computing an operational time table. Eur. J. Oper. Res. 76:98–110
Crainic TG, Toulouse M, Gendreau M (1997) Toward a taxonomy of parallel tabu search heuristics. INFORMS J. Comput. 9(1):61–72
Dell'Amico M, Trubian M (1993) Applying tabu search to the job-shop scheduling problem. Ann. Oper. Res. 41:231–252
Friden C, Hertz A, de Werra D (1989) STABULUS: a technique for finding stable sets in large graphs with tabu search. Computing 42:35–44
Friden C, Hertz A, de Werra D (1990) TABARIS: an exact algorithm based on tabu search for finding a maximum independent set in a graph. Comput. Oper. Res. 17:437–445
Gendreau M, Hertz A, Laporte G (1994) A tabu search heuristic for the vehicle routing problem. Manag. Sci. 40(10):1276–1290
Glover F (1989) Tabu search, Part I. ORSA J. Comput. 1:190–206
Glover F (1990) Tabu search, Part II. ORSA J. Comput. 2:4–32
Glover F, Kochenberger GA (2002) Handbook of Metaheuristics. Kluwer Academic Publishers, Boston
Glover F, Laguna M (1998) Tabu Search. Kluwer Academic Publishers, Boston
Hertz A, de Werra D (1990) The tabu search metaheuristic: how we used it. Ann. Math. Artif. Intell. 1:111–121

Hertz A (1991) Tabu search for large scale timetabling problems. Eur. J. Oper. Res. 54(1):39–47
Hertz A, de Werra D (1987) Using tabu search techniques for graph coloring. Computing 39:345–351
Li VC, Curry GL, Boyd EA (2004) Towards the real time solution of strike force asset allocation problem. Comput. Oper. Res. 31(12):273–291
Lin S, Kernighan BW (1973) An effective heuristic algorithm for the traveling-salesman problem. Oper. Res. 21:498–516
Lokketangen A, Glover F (1998) Solving zero-one mixed integer programming problems using tabu search. Eur. J. Oper. Res. 106(2–3):624–658
Hanafi S, Freville A (1998) An efficient tabu search approach for the 0–1 multidimensional knapsack problem. Eur. J. Oper. Res. 106(2–3):659–675

Appendix E Genetic Algorithms

E.1 Introduction to “Sexual Reproduction”

Genetic algorithms (GA) use techniques inspired by sexual reproduction. Each cell of an individual contains the same set of chromosomes that are strings of DNA (deoxyribonucleic acid). Such a set defines the whole individual. A chromo- some is made of genes that are blocks of DNA. A gene encodes a characteristic of the individual. A gene has its own position in the chromosome. The set of chro- mosomes is the genome. During the reproduction process a complete new chromosome is formed from the parents’ chromosomes by means of two mechanisms: • Recombination (or crossover) consists in taking parts of the chromosomes of both parents to form the new one. • Mutation consists of changing elements of the DNA. Mutation is mainly caused by errors in copying genes from parents during recombination. The fitness of individuals reflects their efficiency in their environment. The un- derlying idea is that the better the fitness of the parents, the higher the probability of elevated fitness for their descendents. In a , an individual is a feasible or admissible solution.

E.2 Adaptation of Sexual Reproduction to Optimization Processes

Assume that we have to find an optimal solution of a combinatorial problem. In a genetic algorithm, we assign a code to each solution. This code will play the role of a genome and is supposed to characterize the solution in an unambiguous manner. Furthermore, the criterion associated with the problem takes a value for each solution (or code) and this value reflects the fitness of the solution. If the solution is just admissible, that is to say if some of the constraints are not satisfied, then the value of the criterion is penalized. The algorithm is inspired by Darwin's theory of evolution. Assume that a set of solutions (feasible or admissible) has been generated either at random or by means of a heuristic algorithm. This set constitutes the initial population. Let n be its size. The genetic algorithm can be summarized as follows.

Algorithm E.1. (Genetic algorithm)
1. Select n pairs of solutions in the population. A solution is chosen at random with a probability that increases with the fitness of the solution. In other words, the probability of choosing a solution s is proportional to the criterion f(s) if the objective is to maximize the criterion, and to M − f(s), where M is an upper bound of the criterion, if the objective is to minimize the criterion.
2. Generate 2 descendants for each pair by simulating the reproduction process. Thus, the size of the new population is still n. This new population is a generation issued from the previous population.
3. Check if the stopping test holds. If yes, stop the computation, otherwise go to Step 1. Several stopping tests are available, such as, for instance:
– A fixed number of generations has been reached.
– The system has reached a plateau such that successive iterations no longer produce better solutions.
Let us now explore in detail the ingredients of a genetic algorithm.

E.3 Ingredients of Genetic Algorithms

A typical genetic algorithm requires: 1. a genetic representation of the solution (code); 2. a reproduction process; 3. a fitness function to evaluate the solution domain (criterion).

E.3.1 Code

As mentioned before, a code should characterize a solution and lead to the corresponding criterion value in an unambiguous manner. The code is a sequence of numbers or characters that belong to a finite set. In other words, a code can be a binary vector, a vector whose elements are integer values, or a dynamic structure such as a list or a tree. Let us consider some examples to clarify the concept.

E.3.1.1 Binary Code

Assume that a problem involves integers belonging to the set X = {0, 1, …, 99}. In base 2, the integer 99 is represented by the string [1100011]. As a consequence, any integer of X can be represented by a string of 7 binary digits in base 2. For example:
1. In base 2, the integer 2 is written [0000010].
2. In base 2, the integer 17 is written [0010001].
3. In base 2, the integer 63 is written [0111111].
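A couple of helper functions make this encoding explicit; this is only a sketch and the function names are ours.

def encode(value, bits=7):
    """7-bit binary code (most significant bit first) of an integer of X = {0, ..., 99}."""
    return [int(b) for b in format(value, f"0{bits}b")]

def decode(code):
    """Integer represented by a binary code (most significant bit first)."""
    return int("".join(map(str, code)), 2)

print(encode(99))          # [1, 1, 0, 0, 0, 1, 1]
print(decode(encode(17)))  # 17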

E.3.1.2 Code Made with Integers

Assume that several ordinal parameters characterize a solution. For example, as- sume that for the problem under consideration the solutions are characterized by two parameters A and B and that: • A takes the values “low”, “medium” and “high”. • B takes the values “small”, and “large”. In this case we can associate 1 to “low”, 2 to “medium” and 3 to “high”. Simi- larly, we can associate 1 to “small” and 2 to “large”. If a solution is characterized by the pair (“medium”, “small”), then the code is ]1,2[ . Note: We should keep in mind that the code must be large enough to allow crossover and mutation. We also should be able to change “slightly” a code with limited consequence on the value of the criterion, which requires codes that are as large as possible and that cover uniformly all the characteristics of the solutions. It is not the case for the codes presented in this subsection.

E.3.1.3 Code is a List

This situation happens, in particular, when the parameters that define the solution take qualitative values. For example, the code [F, BL, BB] may represent a female with blue eyes and blond hair. Note that, in this case, deriving the value of the criterion from the code is often not straightforward and may require a sophisticated algorithm. It is sometimes difficult to keep up the consistency of the code when the reproduction process is going on. In other words, it may happen that the code becomes meaningless for some solutions. For example, the code may contain contradictory or incompatible elements.

E.3.2 Reproduction Process

In this section, we explore the choice of the parents used for the reproduction, re- combination process and mutation processes.

E.3.2.1 Choice of the Parents

Let c_i, i = 1, …, n, be the value of the criterion for solution i. Assume also that we are looking for the solution that maximizes the criterion. In this case, we associate to solution i the following probability:

$$p_i = \frac{c_i}{\sum_{k=1}^{n} c_k} \qquad \text{(E.1)}$$

As mentioned before, when the objective is to find the solution that minimizes the criterion, we set:

$$p_i = \frac{M - c_i}{\sum_{k=1}^{n} (M - c_k)}, \quad \text{where } M = \max_{i \in \{1, 2, \ldots, n\}} c_i \qquad \text{(E.2)}$$
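The selection mechanism can be sketched as follows in Python (names ours); note that, with Relation E.2, the worst solution of the population receives a selection probability equal to 0.

import random

def selection_probabilities(criteria, maximize=True):
    """Probabilities (E.1) / (E.2) of choosing each solution as a parent."""
    if maximize:
        weights = list(criteria)
    else:
        M = max(criteria)
        weights = [M - c for c in criteria]
    total = sum(weights)
    return [w / total for w in weights]

def pick_parent(population, criteria, maximize=True):
    """Roulette-wheel choice of one parent according to those probabilities."""
    return random.choices(population, weights=selection_probabilities(criteria, maximize))[0]

# Hypothetical population of 4 coded solutions with criterion values to minimize
pop = ["A", "B", "C", "D"]
crit = [10.0, 4.0, 7.0, 12.0]
print(selection_probabilities(crit, maximize=False))  # the worst solution D gets probability 0
print(pick_parent(pop, crit, maximize=False))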

Figure E.1 One-point crossover: the codes of the two parents are cut after position k and the tails are exchanged to form the codes of the two descendants

E.3.2.2 Recombination (or Crossover) Process

In this section we provide the simplest recombination process (i.e., a one-point crossover) and a two-point recombination process. The simplest process is presented in Figure E.1. Let K be the number of elements in the code. We generate at random k in {1, 2, …, K − 1} and:
• construct the code of descendant 1 by concatenating the first k elements of the code of parent 1 with the elements k + 1 to K of the code of parent 2;
• construct the code of descendant 2 by concatenating the first k elements of the code of parent 2 with the elements k + 1 to K of the code of parent 1.
Other recombination processes can be introduced such as, for example, the two-point crossover represented in Figure E.2.

Figure E.2 Two-point crossover: the codes of the two parents are cut at positions k and l and the middle segments are exchanged to form the codes of the two descendants

As mentioned before, the recombination process may lead to a code that is neither feasible nor admissible. Such a situation has already been presented in Section 7.5.3.3. In this case we can restart the recombination process but, if the probability of reaching two feasible codes is very low, we have to introduce more sophisticated processes to obtain the codes of the descendants: such a case is shown in the rest of this appendix.

Example 1

Consider the set X = {0, 1, …, 99} introduced before and the binary code with 7 digits. Consider the integers 56, whose code is [0111000], and 97, whose code is [1100001]. These codes will be combined using the simplest recombination process described above with k = 3.
The code of descendant 1 is [0110001] and corresponds to the integer 49.
The code of descendant 2 is [1101000] and corresponds to the integer 104. As shown, the second descendant does not belong to X. Thus, the result of this recombination process is not acceptable.
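The following sketch (names ours) implements this one-point crossover on binary codes and reproduces the two descendants of Example 1.

import random

def one_point_crossover(parent1, parent2, k=None):
    """One-point crossover of two codes of equal length K:
    cut after position k (1 <= k <= K-1) and exchange the tails."""
    K = len(parent1)
    if k is None:
        k = random.randint(1, K - 1)
    return parent1[:k] + parent2[k:], parent2[:k] + parent1[k:]

# Example 1: 56 = [0,1,1,1,0,0,0] and 97 = [1,1,0,0,0,0,1], cut at k = 3
d1, d2 = one_point_crossover([0, 1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0, 1], k=3)
print(d1, int("".join(map(str, d1)), 2))   # [0, 1, 1, 0, 0, 0, 1] -> 49
print(d2, int("".join(map(str, d2)), 2))   # [1, 1, 0, 1, 0, 0, 0] -> 104 (outside X)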

Example 2

Consider K products P_1, P_2, …, P_K that visit successively and in the same order N machines M_1, M_2, …, M_N. We denote by t_{i,j} the operation time of P_i on M_j. In the appendix dedicated to tabu search it was shown how to compute the makespan when the order is given.
It is easy to verify that choosing the order in which products visit the machines as the code of a solution leads to inconsistencies. For example, assume that K = 4 and consider 2 orders (i.e., 2 solutions). If the code is the sequence of product indices:

• If the order of solution S_1 is P_1, P_4, P_3, P_2, then the code is C_1 = [1, 4, 3, 2].
• If the order of solution S_2 is P_1, P_2, P_3, P_4, then the code is C_2 = [1, 2, 3, 4].
If we apply the simplest recombination for k = 2, we obtain two codes:

C_3 = [1, 4, 3, 4] and C_4 = [1, 2, 3, 2]

Both codes correspond to unfeasible orders because the same index appears twice in each code. Thus, a more sophisticated code is needed. For a given solution, we define:

$$x_{i,j} = \begin{cases} 1 & \text{if } P_i \text{ precedes } P_j \text{ in the order (solution) under consideration} \\ 0 & \text{otherwise} \end{cases}$$

The code is defined as follows:

$$C = [x_{1,2}, \ldots, x_{1,K}, \ldots, x_{i,1}, \ldots, x_{i,i-1}, x_{i,i+1}, \ldots, x_{i,K}, \ldots, x_{K,1}, \ldots, x_{K,K-1}]$$

The number of elements in the code is K(K − 1).

For example, the code corresponding to solution S1 is now:

C_1 = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1]

The new code corresponding to S_2 is C_2 = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0].

The algorithm used to obtain the codes of the descendants can be summarized as follows:

1. First apply the rule: "If product P_i precedes P_j in the solutions that define both parents, then P_i precedes P_j in the solutions that define both descendants." In other words, if a digit is equal to 1 in the codes defining both parents, then the digit in the same position is equal to 1 in the codes of both descendants. Thus, for the above example:

C_3 = [1, 1, 1, •, •, •, •, •, •, •, •, •]
C_4 = [1, 1, 1, •, •, •, •, •, •, •, •, •]

2. If x_{i,j} = 1 in a code, then x_{j,i} = 0 in the same code. Thus, C_3 and C_4 become:

C_3 = [1, 1, 1, 0, •, •, 0, •, •, 0, •, •]
C_4 = [1, 1, 1, 0, •, •, 0, •, •, 0, •, •]

3. From now on, we explain the algorithm for the first descendant. The process is the same for the second one. Generate at random on {0, 1} the first digit that has not been defined yet (digit number 5 in this example). Assume that 1 is generated. The code becomes:

C_3 = [1, 1, 1, 0, 1, •, 0, •, •, 0, •, •]

4. According to the last digit introduced (x_{2,3} = 1), the code can be enriched with x_{3,2} = 0:

C_3 = [1, 1, 1, 0, 1, •, 0, 0, •, 0, •, •]

5. Now, apply the transitivity rule: "If P_i precedes P_j and P_j precedes P_k, then P_i precedes P_k." This rule can be rewritten as: if, in the code, x_{i,j} = 1 and x_{j,k} = 1, then x_{i,k} = 1. We apply this rule to the last 1-digit introduced, which is x_{2,3} in our case. Since none of the x_{3,•} is equal to 1, the transitivity rule does not apply.

Going back to Step 3, we generate at random the value of x_{2,4}. Assume that 0 is obtained. Then, assign 0 to x_{2,4} and 1 to x_{4,2}:

C3 = [1, 1, 1, 0, 1, 0, 0, 0, •, 0, 1, •]

Since x_{4,2} = 1 and x_{2,3} = 1, then set x_{4,3} = 1:

C3 = [1, 1, 1, 0, 1, 0, 0, 0, •, 0, 1, 1]

Since x_{4,3} = 1, then x_{3,4} = 0. Finally, the code of the first descendant is:

C3 = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1]
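The sketch below (in Python, with indices numbered from 0 rather than 1; all names are ours) follows these steps for the first descendant: keep the precedences common to both parents, enforce antisymmetry, then complete the remaining digits at random, taking the transitive closure after each random choice. Taking a full closure after every choice is an assumption consistent with what Steps 3 to 5 do digit by digit.

import random

def precedence_matrix(order):
    """x[i][j] = 1 if product i precedes product j in the given order."""
    K = len(order)
    position = {p: r for r, p in enumerate(order)}
    return [[1 if i != j and position[i] < position[j] else 0 for j in range(K)]
            for i in range(K)]

def close_transitively(x):
    """Apply: x[i][j] = 1 and x[j][k] = 1  =>  x[i][k] = 1 and x[k][i] = 0."""
    K = len(x)
    changed = True
    while changed:
        changed = False
        for i in range(K):
            for j in range(K):
                for k in range(K):
                    if x[i][j] == 1 and x[j][k] == 1 and x[i][k] != 1:
                        x[i][k], x[k][i] = 1, 0
                        changed = True

def make_descendant(order1, order2):
    K = len(order1)
    a, b = precedence_matrix(order1), precedence_matrix(order2)
    x = [[None] * K for _ in range(K)]
    # Steps 1 and 2: keep the common precedences and enforce antisymmetry.
    for i in range(K):
        for j in range(K):
            if i != j and a[i][j] == 1 and b[i][j] == 1:
                x[i][j], x[j][i] = 1, 0
    # Steps 3 to 5: complete the code at random, closing transitively each time.
    for i in range(K):
        for j in range(K):
            if i != j and x[i][j] is None:
                bit = random.randint(0, 1)
                x[i][j], x[j][i] = bit, 1 - bit
                close_transitively(x)
    # Flatten into the code [x_{1,2}, ..., x_{1,K}, x_{2,1}, ..., x_{K,K-1}].
    return [x[i][j] for i in range(K) for j in range(K) if i != j]

# Parents of the example: S1 = (P1, P4, P3, P2) and S2 = (P1, P2, P3, P4).
print(make_descendant([0, 3, 2, 1], [0, 1, 2, 3]))   # a feasible descendant; depends on the random draws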

E.3.2.3 Mutation

A mutation is not always necessary and is even not recommended in some circumstances. For instance, changing one element of a code in Example 2 presented above may transform a feasible code into an infeasible one. We just have to keep in mind that the probability of a mutation should remain very low and that we have to check whether a code resulting from a mutation remains feasible. Note: a local optimization algorithm is often used to obtain a mutation.
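As an illustration, a feasibility check of this kind for the precedence code of Example 2 could look as follows. This is a sketch only: the layout of the code and the helper names are assumptions consistent with Section E.3.2.2, and a code is declared feasible when it still describes a total order (antisymmetric and transitive).

def unflatten(code, K):
    """Rebuild the K x K matrix x[i][j] from the flattened code of length K(K-1)."""
    x = [[0] * K for _ in range(K)]
    digits = iter(code)
    for i in range(K):
        for j in range(K):
            if i != j:
                x[i][j] = next(digits)
    return x

def is_feasible(code, K):
    x = unflatten(code, K)
    for i in range(K):
        for j in range(K):
            if i == j:
                continue
            if x[i][j] + x[j][i] != 1:           # exactly one of "i before j", "j before i"
                return False
            for k in range(K):
                if k != i and k != j and x[i][j] and x[j][k] and not x[i][k]:
                    return False                  # transitivity violated
    return True

# The first descendant built in Example 2 is feasible; flipping one digit usually is not.
C3 = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1]
print(is_feasible(C3, 4))        # True
mutated = C3[:]
mutated[0] = 0
print(is_feasible(mutated, 4))   # False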

E.3.3 Criterion

A combinatorial problem usually imposes the criterion, but it may happen that this criterion does not have the characteristics required for an efficient application of a genetic algorithm. This is the case when it is not sensitive enough to small changes in the code: changing one element of the code should result in a perceptible change in the criterion value. For instance, we met this situation in Chapter 7, where a genetic algorithm was used to solve a line-balancing problem: instead of minimizing the number of cells, which evolves in a discrete way, a continuous function was introduced that penalizes the station loading according to the number of stations concerned. Indeed, this criterion also leads to the minimization of the number of cells.

It may also happen that the value of the criterion becomes excessive for a limited number of changes in the code. In other words, some individuals (i.e., solutions) lead to criterion values that are well above those corresponding to most of the solutions. In this case, it is advisable to replace the criterion K with f(K) = K^α, where α < 1.
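As a rough illustration (the numerical values below are made up, not taken from the book), a correction of this kind with α = 0.5 pulls exceptional criterion values back toward the bulk of the population:

alpha = 0.5                            # exponent of the correction f(K) = K**alpha, alpha < 1
for K in (100.0, 120.0, 5000.0):       # 5000.0 plays the role of an "excessive" value
    print(K, "->", round(K ** alpha, 1))   # 100.0 -> 10.0, 120.0 -> 11.0, 5000.0 -> 70.7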

E.4 Preparing the Use of a Genetic Algorithm

The following decisions should be made when using a genetic algorithm.

E.4.1 Which Test Should Be Used to Stop the Search?

Usually one of the following tests is used:

• The user is required to define the number of iterations to be done, and the search is stopped when this number is reached.
• The search is stopped when all the elements of the population are the same.
• The search is stopped when a plateau is reached, that is, when a given number of successive iterations does not produce a better solution.

Indeed, this list is not exhaustive; a skeleton combining these tests is sketched below.
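A possible skeleton combining the three tests, written as a Python sketch: minimization is assumed, and one_generation and criterion stand for the problem-specific parts, which are not defined here.

def run_ga(one_generation, population, criterion,
           max_iterations=1000, plateau_length=50):
    """Stop on an iteration budget, a uniform population, or a plateau."""
    best = min(criterion(s) for s in population)
    since_improvement = 0
    for _ in range(max_iterations):                      # test 1: iteration budget
        population = one_generation(population)
        current_best = min(criterion(s) for s in population)
        if current_best < best:
            best, since_improvement = current_best, 0
        else:
            since_improvement += 1
        if all(s == population[0] for s in population):  # test 2: uniform population
            break
        if since_improvement >= plateau_length:          # test 3: plateau reached
            break
    return best, population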

E.4.2 What Should Be the Size of the Population?

The size of the population depends on the number of elements in the code; it should be greater than that number.

E.4.3 What Should Be the Probability of Mutation?

As mentioned before, a mutation is not always necessary. In particular, mutations can be ignored when randomness is used in the crossover process, as was the case in the second example of Section E.3.2.2. When we can change the elements of a code without violating the constraints that apply to the solutions, the mutation consists of changing one element of the code with a probability that is usually very low (in the order of 0.001 or 0.01).

E.5 Examples

In Chapter 7 we showed how to use a genetic algorithm to solve a line-balancing problem. In this section, two more examples are provided.

E.5.1 Traveling Salesman Problem

This is a scheduling problem since its objective is to find a visiting order among the cities. Thus, the definition of the code and the recombination process will be the same as the ones given in Example 2, Section E.3.2.2.

The criterion is the length of the circuit and does not pose a problem. Mutation is not used since randomness already occurs when generating the descendants.

E.5.2 Graph Coloring Problem

This problem has been solved in Appendix D using a tabu search approach. Denote by n the number of vertices. The basic problem is to check if K colors are enough to color all the vertices of the graph, taking into account that any two vertices connected by an edge must be colored differently. For using a genetic algorithm, we have to define the following elements: the code, the choice of the parents, the recombination process and the mutation.

E.5.2.1 Code

Assume that the vertices are numbered from 1 to n. Each vertex must be assigned to one of the K subsets, each of which represents a color. After assigning the vertices to the subsets, we obtain P = {V_1, …, V_K}, which is a partition of the vertices into K subsets; P is a solution.

The code will be a string < k_1, k_2, …, k_i, …, k_n >, where k_i ∈ {1, …, K} represents the color of vertex i (or the index of the subset to which i belongs).

E.5.2.2 Choice of the Parents

A partition P (i.e., a solution) being given, we define:

Q(P) = \sum_{k=1}^{K} Q(V_k)

where Q(V_k) is the number of edges having both endpoints in the same subset V_k. In other words, Q(P) is the number of pairs of connected vertices having the same color in the partition (i.e., solution) P.

We denote by {P_1, P_2, …, P_W} the population (i.e., a set of solutions) under consideration. The probability to choose P_i as a parent is (see Relation E.2):

q_i = \frac{n - Q(P_i)}{\sum_{k=1}^{W} [n - Q(P_k)]} = \frac{n - Q(P_i)}{Wn - \sum_{k=1}^{W} Q(P_k)}

where n is an upper bound of the number of colors: this is the case where each vertex has a different color. Indeed, the objective is to maximize n − Q(P). An optimal solution P* is such that Q(P*) = 0: the two endpoints of an edge are never in the same subset (i.e., two vertices connected by an edge are not colored identically).
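A minimal Python sketch of Q(P) and of these selection probabilities; the graph is given as an edge list, a solution is the color string defined in Section E.5.2.1, and the small 4-vertex example is illustrative only.

def Q(code, edges):
    """Number of edges whose two endpoints received the same color."""
    return sum(1 for (u, v) in edges if code[u] == code[v])

def selection_probabilities(population, edges, n):
    """q_i = (n - Q(P_i)) / sum_k (n - Q(P_k)), as in the relation above."""
    fitness = [n - Q(code, edges) for code in population]
    total = sum(fitness)
    return [f / total for f in fitness]

# A 4-vertex cycle colored with K = 2 colors (vertices numbered 0..3 here).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
population = [[1, 2, 1, 2],   # proper coloring, Q = 0
              [1, 1, 2, 2],   # Q = 2
              [1, 1, 1, 1]]   # Q = 4
print([Q(c, edges) for c in population])                 # [0, 2, 4]
print(selection_probabilities(population, edges, n=4))   # [2/3, 1/3, 0.0]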

E.5.2.3 Recombination Process

For this problem, we just have to apply the simplest recombination (see Figure E.1) or the two-point crossover (see Figure E.2).
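A sketch of the two operators applied to color strings (names and example values are illustrative; with this coding any offspring is again a string of colors in {1, …, K}, so no repair is needed):

import random

def one_point(p1, p2, k):
    """Simplest recombination: swap the tails of the two parents after position k."""
    return p1[:k] + p2[k:], p2[:k] + p1[k:]

def two_point(p1, p2, k1, k2):
    """Two-point crossover: swap the middle segments between positions k1 and k2."""
    return (p1[:k1] + p2[k1:k2] + p1[k2:],
            p2[:k1] + p1[k1:k2] + p2[k2:])

p1 = [1, 2, 1, 2, 3, 1]
p2 = [3, 3, 2, 1, 1, 2]
print(one_point(p1, p2, k=random.randint(1, len(p1) - 1)))
print(two_point(p1, p2, 2, 4))   # ([1, 2, 2, 1, 3, 1], [3, 3, 1, 2, 1, 2])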

E.5.2.4 Mutation

For this problem, there is no risk of reaching an infeasible code when applying a mutation. A mutation process can be summarized as follows (a short code sketch follows Algorithm E.2).

Algorithm E.2. (Mutation)

1. Decide first at random if a mutation should be applied. The probability of applying a mutation should not exceed 0.01.
2. If the mutation is decided, then:

2.1. Choose at random an element e of the code.
2.2. Choose at random r ∈ {1, …, K}.
2.3. Set e = r.
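A minimal Python sketch of Algorithm E.2 (the function name and the forced-mutation call at the end are illustrative):

import random

def mutate(code, K, probability=0.01):
    """Algorithm E.2: with a small probability, redraw one color of the code."""
    code = code[:]                               # do not alter the parent code
    if random.random() < probability:            # step 1: decide if a mutation occurs
        position = random.randrange(len(code))   # step 2.1: choose an element e
        code[position] = random.randint(1, K)    # steps 2.2 and 2.3: set e = r, r in {1, ..., K}
    return code

print(mutate([1, 2, 1, 2, 3, 1], K=3, probability=1.0))   # mutation forced for the demonstration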

E.6 Concluding Remarks

For the use of genetic algorithms:

• It is sometimes not easy to define a code that fits with the genetic approach. Such a code should capture the characteristics of a solution, but it should also contain a number of elements large enough to facilitate recombination.
• The recombination should preserve, with a reasonable probability, the consistency of the codes of the descendants. In other words, the codes resulting from the recombination of parents should represent solutions.
• The criterion should be sensitive to small changes in the code. Furthermore, the values of the criterion for all the possible codes (i.e., solutions) should remain within reasonable limits; otherwise a correction is introduced, as mentioned in Section E.3.3.

E.7 Recommended Reading

Alander J (1995) An indexed bibliography of genetic algorithms in manufacturing. In: Chambers L (ed) Practical Handbook of Genetic Algorithms: New Frontiers, Vol. II. CRC Press, Boca Raton, FL
Biegel J, Davern J (1990) Genetic algorithms and job-shop scheduling. Comput. Ind. Eng. 19:81–91
Borisovsky P, Dolgui A, Eremeev A (2009) Genetic algorithms for a supply management problem: MIP-recombination vs greedy decoder. Eur. J. Oper. Res. 195(3):770–779
Chu C, Proth J-M (1996) L’Ordonnancement et ses Applications. Sciences de l’Ingénieur, Masson, Paris
Davis L (ed) (1991) Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York, NY
Dolgui A, Eremeev A, Kolokolov A, Sigaev V (2002) Buffer allocation in production line with unreliable machines. J. Math. Mod. Alg. 1(2):89–104
Falkenauer E (1993) The grouping genetic algorithms: Widening the scope of GAs. JORBEL 33(1–2):79–102
Falkenauer E (1996) A hybrid grouping genetic algorithm for bin packing. J. Heuristics 2:5–30
Falkenauer E (1998) Genetic Algorithms for Grouping Problems. John Wiley & Sons, Chichester, England
Garey MR, Johnson DS (1979) Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, CA
Goldberg DE (1989) Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA
Hill T, Lundgren A, Fredriksson R, Schiöth HB (2005) Genetic algorithm for large-scale maximum parsimony phylogenetic analysis of proteins. Bioch. Bioph. Acta 1725:19–29
Holland JH (1975) Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI
Homaifar A, Qi CX, Lai SH (1994) Constrained optimization via genetic algorithms. Simulation 62(4):242–253
Koza J (1992) Genetic Programming. MIT Press, Cambridge, MA
Laporte G (1992) The Traveling Salesman Problem: an overview of exact and approximate algorithms. Eur. J. Oper. Res. 59(2):231–247
Mühlenbein H (1997) Evolutionary Algorithms: Theory and Applications. In: Aarts E, Lenstra JK (eds) Local Search in Combinatorial Optimization. John Wiley & Sons, New York, NY
Rubinovitz J, Levitin G (1995) Genetic algorithm for assembly line balancing. Int. J. Prod. Econ. 41:343–354
To CC, Vohradsky J (2007) A parallel genetic algorithm for single class pattern classification and its application for gene expression profiling in Streptomyces coelicolor. BMC Genomics 8:49
Wang S, Wang Y, Du W, Sun F, Wang X, Zhou C, Liang Y (2007) A multi-approaches-guided genetic algorithm with application to operon prediction. Art. Intell. Med. 41(2):151–159

Authors’ Biographies

Prof. Alexandre Dolgui is the Director of the Centre for Industrial Engineering and Computer Science at the Ecole des Mines de Saint-Etienne (France). The principal research of A. Dolgui focuses on manufacturing line design, production planning and supply chain optimization. The main results are based on exact mathematical programming methods and their intelligent coupling with heuristics and metaheuristics. He has coauthored 4 books, edited 11 additional books or conference proceedings, and published about 105 papers in refereed journals, 15 book chapters and over 250 papers in conference proceedings. He is an Area Editor of the Computers & Industrial Engineering journal, and an Associate Editor of Omega (the International Journal of Management Science) and IEEE Transactions on Industrial Informatics. A. Dolgui is also an Editorial Board Member of 10 other journals such as Int. J. of Production Economics, Int. J. of Systems Science, J. Mathematical Modelling and Algorithms, J. of Decision Systems, Journal Européen des Systèmes Automatisés, etc. He is a Board Member of the International Foundation for Production Research and a Member of IFAC Technical Committees 5.1 and 5.2. He has been a guest editor of Int. J. of Production Research, European J. of Operational Research, and other journals, and Scientific Chair of several major events including the symposiums INCOM 2006 and 2009. For further information, see www.emse.fr/~dolgui

Prof. Jean-Marie Proth is currently a Consultant, Researcher and an Associate Editor of the IEEE Transactions on Industrial Informatics. He has been Research Director at INRIA (National Institute for Computer Science and Automation), leader of the SAGEP (Simulation, Analysis and Management of Production Systems) team at the Lorraine research centre of INRIA, Associate Member of the Laboratory of Mechanical Engineering, University of Maryland, and a University Professor in France as well as at the European Institute for Advanced Studies in Management (Brussels). He has carried on close collaboration with several US universities. His main research focuses on operations research techniques, Petri nets and data analysis for production management, especially facility layout, scheduling and supply chains. He has authored or coauthored 15 books (textbooks and monographs) and more than 150 papers in major peer-reviewed international journals. He is the author or coauthor of about 300 papers for international conferences and 7 book chapters, the editor of 8 proceedings of international conferences, and has been an invited speaker or invited professor 55 times throughout the world. J.-M. Proth was the supervisor of 28 PhD theses in France and the USA. He was also a coeditor of the journal Applied Models and Data Analysis. He was an Associate Editor of IEEE Transactions on Robotics and Automation (1995–1998). He was guest editor of Engineering Costs and Production Economics and several other refereed international journals. He has been the Chairman of the Program Committee of various international conferences and an officer of several professional societies (for example, the International Society for Productivity Enhancement; Vice-President of Flexible Automation, 1992–1995). For further information, see proth.jean-marie.neuf.fr/
