ALGORITHM FOR THE CUTTING STOCK PROBLEM WITH MULTIPLE RAWS AND LIMITED NUMBER OF CUTTING KNIVES

by

PITJAYA TANGTATSWAS

Submitted in partial fulfillment of the requirements For the Master of Science

Advisor: Vira Chankong

CASE WESTERN RESERVE UNIVERSITY

May, 2017 CASE WESTERN RESERVE UNIVERSITY SCHOOL OF GRADUATE STUDIES

We hereby approve the thesis/dissertation of Pitjaya Tangtatswas

Candidate for the degree of Master of Science

Committee Chair Vira Chankong, Ph.D.

Committee Member Mingguo Hong, Ph.D.

Committee Member Evren Gurkan-Cavusoglu, Ph.D.

Date of Defense December 22, 2016

*We also certify that written approval has been obtained for any proprietary material contained therein.

i

Table of Contents

Chapter 1 ...... 1

Introduction ...... 1

1.1 Background and Significance ...... 1

1.2 Research Objective and Contributions ...... 2

1.3 Outline of the thesis ...... 3

Chapter 2 ...... 4

Literature Review ...... 4

2.1 Cutting Stock Problem ...... 4

2.2 Column Generation ...... 6

2.2.1 ...... 9

2.2.2 Method ...... 14

2.2.3 Dynamic Programming versus Branch and Bound ...... 19

Chapter 3 ...... 22

Solution Methods for Solving CSPs ...... 22

3.1 Cutting Stock Problem with Multiple Raws ...... 22

3.1.1 Find the Initial Solution ...... 23

3.1.2 Knapsack Problem ...... 25

3.1.3 Further additional algorithm ...... 26

3.1.4 Example ...... 26

ii

3.2 Cutting Stock with Limited Number of Rolls of Raw Material ...... 33

3.2.1 Example ...... 35

3.3 Cutting Stock Problem with Limited Number of available Cutting Knives ...... 41

3.3.1 Lagrangian Relaxation Review ...... 42

3.3.1 Example ...... 45

Chapter 4 ...... 52

Computational Results ...... 52

4.1 Cutting Stock with Multiple Raw Material ...... 52

4.2 Cutting Stock with Multiple Limited Number of Rolls of Raw Material ...... 55

4.3 Cutting Stock with Limited Number of Available Cutting Knives ...... 56

Chapter 5 ...... 62

Conclusions ...... 62

Bibliography ...... 64

iii

List of Tables

Table 1 ...... 13

Table 2 ...... 14

Table 3 ...... 21

Table 4 ...... 52

Table 5 ...... 53

Table 6 ...... 53

Table 7 ...... 53

Table 8 ...... 54

Table 9 ...... 54

Table 10 ...... 55

Table 11 ...... 55

Table 12 ...... 55

Table 13 ...... 55

Table 14 ...... 57

Table 15 ...... 57

Table 16 ...... 58

Table 17 ...... 59

Table 18 ...... 59

Table 19 ...... 61

iv

List of Figure

Figure 1 ...... 4

Figure 2 ...... 13

Figure 3 ...... 51

Figure 4 ...... 56

Figure 5 ...... 57

Figure 6 ...... 58

Figure 7 ...... 60

Figure 8 ...... 60

v

Acknowledgements

My deepest gratitude to my advisor, Prof. Vira Chankong. He has helped me throughout my research with great patience. He has shared his wisdom and knowledge without reserve. He cares for his students deeply. He always encouraged me to go on and show me the paths that I can take when I was lost.

I am also grateful with my committees, Prof. Mingguo Hong and Evren Gurkan- Cavusoglu, for their patience and supports in overcoming numerous obstacles I have been facing through my research.

Many thanks to Wanchat Theerannaew. He has helped me since I arrived at CWRU. I had a hard time at first because my bachelor's degree is in power system but he tutored me and provided a great insight for me to understand how system and control works.

I would like to thank one of my best friends, Sukrit Sucharitakul, for all the helps he has provided. He always gives me a direct feedback to help me improve myself further in both academic and in general life.

Nevertheless, I am also grateful to all my friends who I did not mention but have come to my defense and provided a great support that day.

Last but not least, I would like to thank my father who always understand how I feel with just a single glance in the eye, my mother who will always be there to give me courage and spiritual energy, my brother who always gives me a smile and joy whenever I met him, my sister who always encourage me, my uncle who helped me greatly in enrolling for this great university.

vi

Algorithms for the Cutting Stock Problem with Multiple Raws and Limited Number of Cutting Knives

Abstract

by

PITJAYA TANGTATSWAS

In this work, we develop algorithms to efficiently deal with the Cutting Stock problem with multiple standard stocks of raw material (raws) and with limited number of cutting knifes. The main application of the cutting stock problem is in “sheet” industries such as steel, textile, paper, rubber, to name only a few. The problem is to cut standard stocks/rolls (raws) of steel sheets, textile rolls, paper rolls, or rubber sheets to appropriate sizes that will meet multiple orders with minimal waste/cost. The problem can be formulated as an LP with an extremely large number of columns. To solve the LP efficiently, we have to employ a technique called “column generation”, which is essentially a knapsack problem, to identify an entering column or to otherwise certify that an optimal solution has been obtained. This research extends the standard cutting stock problem with a single raw to problems with multiple raws (with or without limited number of each raw), and with limited number of cutting knifes. Algorithms to deal with these additional constraints efficiently are developed and tested.

vii

Chapter 1 Introduction

1.1 Background and Significance The Cutting Stock Problem (CSP) is one of the major challenges faced by

“sheet-related” industries such as paper, steel, or rubber industries. Standard sheets/rolls of paper/steel/rubber with standard widths or lengths are usually produced. When multiple orders of different widths or lengths are received, the standard stocks are then cut to fill the orders, while trying to minimize the total costs

If only a single type of standard stock is used, then one can minimize the number of standard stocks used as a surrogate to minimizing cost. The CSP can be formulated as a linear program (LP), typically with an extremely large number of columns. Each column in the coefficient matrix represents a pattern that each standard stock of material can be cut to fill the order. Since, there are generally extremely large number of possible cut patterns, hence the extremely large number of columns in the coefficient matrix in the resulting LP. To solve the CSP efficiently, a technique called

“column generation” was introduced by Gilmore and Gomory (1). At each iteration of the typical simplex method, a linear integer program known as “knapsack problem” is solved either to determine an entering column (and variable) or to certify that there is no variable with negative reduced cost coefficient. The latter case signifies that an optimal solution has been reached.

The major cost in solving the CSP comes from the cost of solving the knapsack sub-problems to identify entering columns. Two methods are often used to solve

1 knapsack problems, namely dynamic programming and a branch and bound algorithm. These techniques will be described in chapter 2.

In this thesis, we will extend the standard cutting stock problem to include two additional practical features. The first feature is when there are more than one type of standard stocks/rolls (raws), each with different widths/lengths, and each with or without limited number. The second feature is when there is a limited number of available cutting knives. The detail on how to solve these extended CSPs will be describe in chapter 3.

1.2 Research Objective and Contributions We will first verify that a branch and bound approach is generally more efficient than dynamic programming in solving the knapsack problems generated by the CSP. The primary goal of this research is to develop efficient algorithms to solve the CSPs with the two extensions described above. In the case of multiple raws, the coefficient matrix will be modified accordingly to accommodate multiple standard widths/lengths of different raws. After applying a decomposition scheme according to different raws, it is possible that more than one columns will have negative reduced cost coefficients, hence more than one possible entering columns in the same simplex iteration of the overall problem. The main idea of the algorithm in this thesis is to utilize all entering columns with negative reduced cost coefficients. This should help reduce computational cost by reducing the number of major iterations. In the case where the number of cutting knives is limited, an additional knapsack constraint will be added to the regular knapsack sub-problem. To solve the two-constraints

2 knapsack problem, we will explore two techniques, namely Lagrangian Relaxation and a modified branch and bound method.

1.3 Outline of the thesis This thesis is divided into five chapters. This first introduction chapter provides the background of the cutting stock problem, states research goal and contributions, and outlines of the remainder of the thesis. Chapter 2 reviews relevant literature used in this thesis. Chapter 3 describes (i) how the standard CSP model is modified to accommodate the various extension cases considered in this research, and (ii) algorithms developed or modified to solve these extended CSPs. Chapter 4 details computational results to show efficiency and effectiveness of the solution methods proposed in Chapter 3. The last chapter provides a summary and conclusions of the thesis.

3

Chapter 2 Literature Review

2.1 Cutting Stock Problem Cutting Stock Problem is a problem of to determine how to cut the desired product from unlimited or limited pieces of stock of various lengths. The materials can be paper, textiles, cellophane, or metallic foil. The stocks of large widths of those materials are called raws. Later, they will be cut into smaller rolls which are called finals. The finals have to be at least equal to the demand and must satisfy all of the constraints such as, number of knives, different raws, limited raws etc. In order to find to most economical way of cutting the existing raws into the desired finals, the cutting patterns are generated for each raw. The optimal solution is to find the number of the patterns used to cut the raws into finals that can satisfy all of the demands and the constraints such that raws used are minimized. In a large scale problem, the patterns, which can be generated, can easily exceed thousands or hundreds of thousands. It would not be wise to test out every possible pattern to find an optimal solution. Thus, the column-generation technique is introduced in order to find a basic feasible solution.

Figure 1 4

To demonstrate this, consider a simple cutting stock problem which has only one type of unlimited raw will be introduced as in picture 1.

Let

 W = raw width (inches)

 m = number of orders that have to be fulfilled

 n = number of possible cutting patterns

th  w i  the i demand width

 bi  the finals of width w i

 c  cost of raw material

aa11 1n  a  A  = a mn matrix which consists of ij which is the  aam1 mn

number of pieces of length wi that can be cut in pattern j

T  X [,,,,,] x12 x xjn x = a column vector of the number of times that

pattern j is used.

In this case, c = 1 because there is only one raw material and the objective of this problem is to use the raw material as little as possible so there is no need to make c larger than 1 to make this problem more complicated. Thus, the master program of this problem is

n (2.1.1) min x j j 1

Subject to

5

n (2.1.2)  AX  aij x j b i for i 1,2, ,m j 1

(2.1.3) x j  0 for jn1,2, , and x  int

Solving this problem is very difficult for 2 reasons. The first reason is that x j are

integers, which makes this problem an NP-hard. However, bi are usually large in real

applications so, the constraint (2.1.3) can be relaxed to xRj   . Now, this problem becomes a problem which can be solved by using simplex method. The solution from this linear programming problem can be rounded up to get a feasible solution for the original problem. This method may not yield an exact optimal solution for this problem because the number of raws needs to be integer.

Nevertheless, the solution from this relaxed problem will give a nearly optimal solution if are large enough. Increasing a few raw by rounding up the relaxed solution will not make a significant impact on the solution. Secondly, n is usually very large. For example, if raw length is 100 inches with about 40 orders of length between

20 to 80 inches, can easily exceed 1 million patterns. Thus, an ingenious way of solving this problem has been suggested by P.C. Gilmore and R.E. Gomory (1) which is called column generation technique. This technique will be discussed in the next sections of this chapter.

2.2 Column Generation In the simplex method, an entering variable, or entering column which are the coefficients of the entering variable, is needed for each iteration but n is too large so the number of non-basic variable is large too. In general, determining the entering

6 variable needs to find a complete set of non-basic variables but it is too labor intensive. The entering variable is the one that has the lowest reduced cost coefficient, c . Therefore, this technique will divide the original problem into two problems, which are master problem and sub-problem. The sub-problem is created in order to generate an entering column for the master problem by finding the non- basic variable which has the lowest reduced cost coefficient. From the above master problem,

n (2.2.1) min x j j 1

Subject to

n (2.2.2)  aij x j b i for i 1,2, ,m j 1

(2.2.3) x j  0 for jn1,2, ,

Let

 B = the basic matrix which, in this case, contains the set of patterns that is

used to cut the raw material

T  c Bm(c12 :c : :c ) , the cost coefficients of basic variables,

 N (a :a : :a ), the non-basic matrix which contains all other j12 j jnm

possible patterns that are not in

 c T (c :c : :c ) ,where j,,, j j are indices of the (n-m) non- N j12 j jnm 12 nm

basic variables, be the coefficients of non-basic variables which equal to 1

in this problem.

7

T  cB = the coefficients of basic variables which also equal to 1 in this

problem.

TT1  y cBB .

TT Now, the reduced cost coefficient will be cj c j  y a j 1  y a j for

j j12,,, j jnm . The naive way to find the most negative c j is to compute N which is impractical because N can be very large and too labor intensive. Thus,

T the most negative c j can be found by generating a vector a (a12 ,a , ,am )that

T can maximize ya 1 to makec j as low as possible. The sub-problem can be formulated as follow

max yaT (2.2.4)

Subject to

m (2.2.5)  wii a W i1

(2.2.6) ai  nonnegative integer for im1, ,

The sub-problem for the cutting stock problem is a knapsack problem. This thesis uses two methods to find an optimal solution for this problem. The first one is called dynamic programming and the other one is a branch and bound technique.

Those two methods will be explained further in the next section of this thesis. Solving the knapsack problem yields an entering column or, in cutting stock problem, a new cutting pattern for B . Now, the regular procedure of the simplex method will act in

ˆ 1 once again. Let’s define the entering column ass B a , the current right hand side 8

ˆ 1 1 x i b B b , the entering basic variable x s , and d B as . Compare the ratio and di choose index, i , that gives the lowest ratio to leave B . Perform the pivot operation to update . Then, iterate back to the column generation method to find a new entering column until the optimal solution is found.

The efficiency of this cutting stock problem depend on how well we can solve the knapsack problem. Thus, we will consider two algorithm. The first algorithm is dynamic programming in section 2.3 and the second one is branch and bound algorithm in section 2.4. In section 2.5 will be the comparison of the two.

2.2.1 Dynamic Programming

Dynamic Programming (2) (3) solves problem is a method that works around the concept of principle of optimality: from any point on an optimal trajectory, the remaining trajectory is optimal for the corresponding problem initiated at that point.

To solve (2.2.4) by dynamic programming, define

 d = the remaining length of raw

 adi () = a number of times which order i is used in this pattern for stage i

when there is length left (equivalent to state ).

 raii() = yaii = value that can decrease the reduced cost coefficient as it

increases (for the simplicity sake, this thesis will call this value, profit, from

now on)

th  gi() a i wa i i = total length used in raw for i order

9

 fdi () = the maximum profit that can be obtained from order i, i 1, , m if the

remaining length of raw is d .

A problem must have the following characteristic to be able to use dynamic programming (3).

 The problem can be divided into stages with a decision which is required or

not at each stage. For the knapsack problem in the cutting stock problem, the

stage is defined by choosing order width in the pattern. For example, the stage

i contains the order i, i 1, , m .

 Each stage has a number of states associated with it. The definition for state is

the information that is needed at any stage to make an optimal decision. For

the knapsack problem, the states at stage are the remaining length of raw, d

, which is integer and varies from 0 to r .

 The decision chosen at any stage describes how the state at the current stage

is transformed into the state at the next stage. In this problem, the decision at

any stage is the pattern which has the highest profit. Decisions will be carried

over to the next stage to help determine the states in the next stage.

 Given the current state, the optimal decision for each of the remaining stages

must not depend on previously reached states or previously chosen decisions.

This means the problem must satisfy the principle of optimality. In this

problem, the principle of optimality can be defined as following: Suppose there

is a pattern, aˆs , that has the highest profit. For the sake of simplicity, let the

highest profit be K . If this were not true, there would be another pattern that

10

was larger than K . Then, this would contradict the fact that aˆs has the highest

profit. The proof that this problem inhibit the principle of optimality will be

given after the explanation of the next characteristic.

 If the states for the problem have been classified into one of m stages, there

must be a recursion that relates the cost or reward earned during stages

i, i 1, , m to the cost or reward earned from stages i1, i 2, , m . The

recursion of this knapsack problem can be generalized to the following

(2.2.1.1) fdm1( ) 0 for all possible state values dD

(2.2.1.2) fi( d ) max{ r i ( a i )  f i1 [ d  g i ( a i )]}

Now, let’s prove that this recursion holds the principle of optimality. As above, suppose there is a pattern, , that has the highest profit. For the sake of simplicity, let the highest profit be . If this were not true, there would be another pattern that was larger than . Then, this would contradict the fact that has the highest profit.

* *** * For example, if the pattern a , which consist of a12,,, a am , is known to have a1 in the pattern, then f( W ) max{ r ( a** )  f [ r  g ( a )]}. Thus, there is no way to find a 1 111 2 1 pattern which has greater profit because that means there is a pattern that has more profit than f[ r g ( a* )] which is not possible. 211

To find an optimal solution for the knapsack problem, all fm () and am ()

must be found. Then (2.2.1.2) is used to determine all fm1() and am1() , continue to

do this recursively until all f2 () and a2 () are found. The next step is to find fW1()

11 and aW1(). Then to get the optimal solution, the backtracking is needed to find the

optimal solution. To do that, fW1() and aW1() are needed. Then, the time, that the order 1 needs to be cut, are known and the remaining length of raw material for order

2 to m is d W g11[ a ( W )]. The next step is to backtrack to find a2{ W g 1 [ a 1 ( W )]} and repeat the process until the optimal solution is found. To illustrate this algorithm better, let’s consider the example from (3).

This can be described succinctly as follow,

 Step 1: [Initialize] Set im and db , which b is knapsack size.

 Step 2: [Fill the tables by using (2.2.1.1) and (2.2.1.2)] Reduce d by 1 until it

reaches 0 then reduce i by 1 and reset . Repeat this until i 1.

 Step 3: [Backtrack] From the bottom of the table, fb*() and xb*(). Compute 1 1

* the remaining space using xb1() and look up the value in the table to find x 2

. Repeat this process until im .

There is a ten-lb knapsack which needs to be filled with different items. How should the knapsack be filled to maximized total profit?

12

Item 1

4 lb

Profit 11

Item 2 Item 3

3 lb, 5 lb

Profit 7 Profit 12

Figure 2

From the notation above, let rx11()11,() xrx 122  7,()12,g() xrx 233  x 311 x  4 x 1 ,

g2 (xx 2 ) 3 1 and g3 (xx 3 ) 5 3 . Define fdi () to be the maximum profit that can be earned from a knapsack which has d -pound left and is filled with type ii, 1, ,3 item. The first step is to compute the sub-problems that are divided into stages from the bottom

up (t  3,2,1respectively) by using (2.2.1.1) and (2.2.1.2) which yields fdi () as following

10 9 8 7 6 5 4 3 2 1 0

3 24 12 12 12 12 12 0 0 0 0 0

2 24 21 19 4 14 12 7 7 0 0 0

1 25

Table 1

13

The following table contains xdi ()

10 9 8 7 6 5 4 3 2 1 0

3 2 1 1 1 1 1 0 0 0 0 0

2 0 3 1 2 2 0 1 1 0 0 0

1 1

Table 2

Now, the backtracking process will begin to find the optimal solution for this

knapsack problem. We have f 1(10) 25 and x 1(10) 1 . Therefore, one type 1 item should be included in the knapsack. After that, we have 10 4 6lbs left for item 2

and 3. The optimal solution of sub-problem is f 2(6) 14 and x 2(6) 2 so two of type 2 item should be included in knapsack too. Finally, we have 6 2(3) 0 left for

type 3 items and the optimal for sub-problem f 3(0) is 0 and x 3(0) 0 . The backtracking process can be look up in the above tables as bolded and italic numbers. Thus, zero of type 3 item is included in the knapsack. Therefore, the optimal solution for this knapsack problem is 25 with x (1 2 0)T .

2.2.2 Branch and Bound Method

Most integer problems (IP) are solved by using branch and bound technique

(3). Branch and bound method based on the idea that if there is nothing worth to find or explore by using lower bound and upper bound, we can ignore that path and try branching on paths which are worth to find. By relaxing the integer constraint to

14 transform the problem to linear programming problem (LP), the optimal solution of the LP relaxation will be the optimal solution for the original problem if that solution can satisfy the integer constraint. Sadly, most of the solution from the LP relaxation cannot satisfy the integer constraint. Thus, that solution becomes the upper bound for the problem because the optimal value of the IP cannot be larger than the optimal value for the LP relaxation. Then pick one of the variables and branch it by one to create additional sub-problems. Pick any sub-problem that has not yet been solved as an LP. If the feasible solution is found and it is in the bound, this solution will become a candidate solution and replace the lower bound. If the solution is found but it is not feasible, continue to branch the variable. If the value of remaining solutions are not larger than the lower bound which means this path is not worth branching for or there is no potential solution left, the sub-problem of that branch can be ignored because the solution is not feasible. Continue until there is no unsolved sub-problem left. The algorithm will conclude that the present candidate solution or the lower bound is the optimal solution for the IP.

A display of all sub-problems that have been generated is called a tree. Each sub-problem is called a node of the tree. Each line which connects two different level nodes of the tree is called an arc. The last levels of the tree are called leaves.

Let define

 yi = profit gain for each cut from order i

 ai = number of cuts from order

 w i  required width for order

15

 W = length of raw material

 k = the index for finding non zero cut

 aaii for all ik1,2, , 1

 aakk1

 S = the best solution found so far

The knapsack sub-problem for the cutting stock problem can be formulated as following

max yaT (2.2.2.1)

Subject to

m (2.2.2.2)  wii a W i1

To make the algorithm more proficient, the sorting of variables by efficiency

is preferred. Let ywii/ be the efficiency of the variable, and reorganize the variables such that

(2.2.2.3) y1/// w 1 y 2 w 2   ymm w

j1 Each leaf a j is calculated from the remaining length of raw ()l  wii a divided i1

by the required length for order j ( wj ) then it needs to be rounded down because this problem is an IP. Thus can be defined as following

j1 (2.2.2.4) aj()/ W w i a i w j for jm1,2, , i1

16

From each leaf that has been explored, the backtrack to the root will begin step by

step by setting k = m . Keep reducing k by 1 until ak  0 . Then, replace ak by ak 1, because a remaining length must be increased first in order to find a new pattern, and

compute ak12,,, a k a m by using (2.3.5). Suppose that the best found solution is

m *** * a12,,, a am so the best profit found so far is S  yii a . This S is the lower bound. i1

Before exploring any further, let examine whether the branch is worth exploring or

not. By (2.3.5), each of has the efficiency ratio at most ywkk/ . Thus,

mm yk1 (2.2.2.5) yi a i w i a i i k 11wk1 i  k 

From the constraint (2.3.4) gives,

m  wii a W i1

km

wi a i w i a i W i11 i  k 

mk wi a i W w i a i (2.2.2.6) i k 11 i 

Replace variables in (2.4.5) with (2.4.6)

mk (2.2.2.7) yk1  yi a i W w i a i i k 11wk1  i 

Which implies that

m k m (2.2.2.8) yi a i  y i a i  y i a i i1 i  1 i  k  1

17

m k k yk1  yi a i  y i a i  W   w i a i i1 i  1wk1  i  1 (2.2.2.9)

Therefore, the best profit that this branch can give is equal to the right hand side of

(2.3.9). Unless it exceeds S , there is no need to explore this branch because there is

*** no chance of improving a12,,, a am . If it is larger than , continue to explore that branch. Replace S with new value of the solution if the value is greater than S . The main logic of this method is to check whether the branch of the tree is worth exploring or not. This method will prevent the unnecessary labor of exploring hopeless branches and move on to the one that seems to be more promising.

The branch and bound algorithm is as following

Step 1: Set Sk0, 0

Step 2: Find the most promising extension of the current branch.

For j k 1, k  2, , m , set

j1 aj()/ W w i a i w j i1

Then replace k by m .

m  Step 3: Test whether if a better solution is obtained or not. If  yii a S , then i 1

m a,,, a a replace S by  yaii and replace by 12 m . i 1

Step 4: Backtrack to the next branch.

a. If k 1, then stop otherwise replace k by k 1.

b. If x k  0, then return to a l otherwise replace x k by x k 1.

18

Step 5: Check whether the branch is worth exploring or not.

kk yk1  If yi a i W  w i a i  M is true then return to step 4; otherwise ii11wk1 

return to step 2.

2.2.3 Dynamic Programming versus Branch and Bound

Both of the above techniques are often used to solve the knapsack sub- problem in cutting stock problem. Firstly, the comparison of these two techniques is needed. Let define

 yi = profit gain for each cut from order i

 a = (,,,,,)a12 a aim a = a set of vector of number of cuts from order for

im1,2, ,

 w i  required width for order for im1,2, ,

 W = length of raw material

 m = number of required orders

Then, the knapsack problem model for each sub-problem of the cutting stock problem can be formulated as following.

max yaT (2.2.3.1)

Subject to

m (2.2.3.2)  wii a W i1

The sampling data are as below

19

 Test Data 1

o W 100

o w [52;29;27;21]

o b [600;600;600;1200]

 Test Data 2

o W 181

o w [21.625;20.5;20;17.25]

o b [90;51;45;11]

 Test Data 3

o W 100

o w [45;36;31;14]

o b [97;610;395;211]

 Test Data 4

o W  91

o w [25.5;22.5;20;15]

o b [78;40;30;30]

 Test Data 5

o W  5600

o w [1380;1520;1560;1720;1820;1930;2000;2050;2100;2140;2150;2200]

o b [22;25;12;14;18;18;20;10;12;14;16;18;20]

20

Time used in Dynamic Time used in Branch and Test Data Number Programming (second) Bound (second) 1 0.0156 0.0024 2 19.1037 0.0124 3 0.0079 0.0009 4 0.0527 0.0019 5 1.8479 0.0187 Table 3 The results obviously shows that branch and bound algorithm is much better than dynamic programming method.

The conclusion is that branch and bound algorithm is better than dynamic programming because dynamic programming needs to generate many more patterns than branch and bound technique to find an optimal pattern. Plus, branch and bound method only explores a branch if and only if that branch can possibly hold a better solution than the one it has already found. Thus, eliminating the need to find some of useless patterns leads to an improvement for the running time of the algorithm. The optimal solution of the master problem from these two technique can be different but

T the optimal value of the solution before rounding up X [,,,,,] x12 x xjn x , a column vector of the number of times that pattern j is used, are the same.

The remaining section of this chapter will focus on the other complication of cutting stock problem such as, multiple raw material, limited raw material and limited cutting knife.

21

Chapter 3 Solution Methods for Solving CSPs

3.1 Cutting Stock Problem with Multiple Raws By adding more type of raw, the master problem is changed.

Let

 p = number of raw type

 ck  cost per piece of raw k

 Wk  length of raw

 nk = number of possible cutting patterns for raw

a11 a 1n a 1( n 1) a 1( n n ) aa1(n n   n  1) 1( n  n   n  n ) 1| 1 1 2 | | 1 2p 1 1 2 p 1 p  A   | | | a a a a a a m1 mnmn1 (1)() 1 mnn 1  2 mnnn ( 1  2  p 1  1)( mnnnn 1  2   p 1  p )

m() n1  n 2   npp 1  n matrix which consists of aij which is the number of

pieces of length wi that can be cut in pattern j

T  X  x,,,,, x x x = a column vectors of the number of times 12 j n1 n 2   npp 1  n

that pattern j is used.

Thus, the master problem can be formulated as following

n n   n n1 n 1 n 2 n12 n   nk 12 p (3.1.1)

min c12 xj  c x j    c k x j    c p x j j1 j  1 n1 j  1 n 1 n 2 nkp j  1 n 1 n

22

Subject to

n12 n   np (3.1.2) AX aij x j b i for i 1,2, ,m j1

(3.1.3) x j  0 for jn1,2, ,

Now, the simplex algorithm can be used to find an optimal solution for this

problem.

3.1.1 Find the Initial Solution

To find a good initial solution means to find near optimal basic feasible solution. In order to find it, we will use a procedure from (4) as a basis. Let

 푅 = the set of 푖푡ℎorder which still need to be fulfilled (so at the first iteration

푅 = {1,2,3, … , 푚}).

 bi = residual demand for finals of width 푤𝑖 (so at the first iteration bbii  )

Firstly, we have to sort 푤𝑖 in descending order (푤1 > 푤2 > ⋯ > 푤푚) and sort

푡ℎ 푡ℎ 푙푘 in descending order corresponding to 푐푘. For each 푗 iteration, let the 푗 column

푇 푎 = [푎1, 푎2, … , 푎푚+푛] of B be

0 푖푓 푖 ∉ 푅

𝑖−1 푎 = (3.1.4) ⌊(푊1 − ∑ 푤푞푎푞)/푤𝑖⌋ 푖푓 푖 ∈ 푅 { 푞=1

The initial solution X will use this pattern until some are satisfied. In other word,

x j will be the smallest of the ratios baii . Therefore, xj a i b i for all 푖 ∈ 푅 and

xj a q b q for at least one qR . Then, delete q in R , replace each remaining b with

23 bi  x j a i and continue to the next iteration. If the raw material, that we are using, runs out, the algorithm will move on to the second cost effective raw material and so on until the feasible initial solution is found.

For illustration, consider one-unlimited raw cutting stock problem (4) with

 W 100

 w  45 36 31 14T

 b  97 610 395 211T

The algorithm above works as follows.

Iteration 1:

 a1 100 45 2 , a2 10 36 0 , a3 10 31 0 , a4 10 14 0

 x1 97 2 48.5

 {1} is deleted from R .

 b2  610 , b3  395 , b4  211

Iteration 2:

 a10, a 2  100 36   2, a 3   28 31   0, a 4   28 14   2

610 211  x2 min , 105.5 22

 {4}is deleted from .

 bb23610  (105.5*2)  399,  395

Iteration 3:

 a10, a 2  100 36   2, a 3   28 31   0, a 4  0 24

 x3 399 2 199.5

 {2} is deleted from R .

 b3  395

Iteration 4:

 a10, a 2  0, a 3  100 31  3, a 4  0

 x4 395 3 131.67

2 0 0 0 4805 0 2 2 0 105.5 Thus, the initial basic feasible solution is B   and x*   . 0003 B 199.5  0 2 0 0 131.67

3.1.2 Knapsack Problem

The detail for solving this has already describe in section 2 so we are going to summarize the method in this section.

∗ Now, we have initial variables, which are 퐵 and 푥퐵. The next part is to formulate knapsack problem for the sub-problem. In this part, knapsack problem must be solved for each raw and pick the one which yields the largest profit z .

TT1 Firstly, y cBB must be found. Now, let

T  c Bm(c12 :c : :c ) is not equal to 0 anymore but its value depends on which

raw type is used for that pattern.

T  ak (a,a12 k k , ,a, jk ,a mk ) = a column vector consists of cutting pattern for

raw k

25

T Now, the reduced cost coefficient will be changed into ck c k  y a k  c k  z k .

Thus, the new sub-problem can be formulated as follows

For raw kth , k 1, ,p

max  zckk  (3.1.5)

Subject to

m (3.1.6)  wlia jk k i1

3.1.3 Further additional algorithm

To improve the simplex method further, one algorithm will be added into the simplex method. It would be a waste to delete the remaining patterns from each raw material. The remaining patterns, or the remaining generated columns, can be used by picking one of the remaining patterns and testing it whether it can improve the solution or not. We need to check whether the reduced cost coefficient of the

TT1 remaining generated column is still negative or not. To test it, new y cBB must

T be found. Then, replace ak with the chosen remaining pattern in zk y a k c k . If new

zk  0 , use this pattern as the next entering column. Then, move on to the next remaining pattern until there is none left.

3.1.4 Example

To make things clearer, the illustration is needed. Let’s consider a cutting- stock problem where

26

 p  3, number of raws type

 c  1.9 1 0.6T , cost for each raws type

 W  10 6 4T , length for each raws type

 m  4 , number of orders

 w  2 3 5 8T , required length for each order

 b  400 350 200 150T , required number for each order

 ni = number of all the possible cutting pattern for raw i

This cutting stock-problem can be formulated as following,

n1 n 1 n 2 n1 n 2 n 3 (3.1.8) min (1.9*xj )  x j  (0.6* x j ) j1 j  n1  1 j  n 1  n 2  1

Subject to

n1 n 2 n 3 (3.1.9) AX  aij x j b i for im1,2, , j1

(3.1.10) x j  0 for j1,2, , n1  n 2  n 3

Firstly, the initial solution must be calculated. Using the algorithm section 3.1.1, the longest raw are used for all orders because some order might have a longer length than raw. The initial solution is as following

1 0 0 5 0 0 3 0 B  and x*  150 100 116.667 50T 0 2 0 0 b  1 0 0 0

27

Now the first iteration can be started. The procedure will be explained in steps.

TT1  Step 1: Solve y c B . Finding B1 can be too labor intensive and time

consuming so this equation will be simplify to yBT  1.9 1.9 1.9 1.9.

Thus, y  0.38 0.63 0.95 1.52T .

 Step 2: Solve the knapsack subproblems for each raw. The subproblem for

th each q raw is as following

(3.1.11) max0.38a1 0.63 a 2  0.95 a 3  1.52 a 4

Subject to

(3.1.12) 2a1 3 a 2  5 a 3  8 a 4  lq for q 1,2,3

Branch and bound algorithm is used to solve this knapsack problem. The

equation (2.4.4) is used. In the first iteration, the initial solution, for q 1, can

be found by

a1 10 / 2 5

21 a2(10  wii a ) / 3  (10  w 1 a 1 ) / 3    (10  (2)(5)) / 3   0 i1

31 a3(10  wii a ) / 5  (10  w 1 a 1  w 2 a 2 ) / 5    (10  10  0) / 5   0 i1

41 a4 (10  wii a ) / 8  (10  10  0  0) / 8  0 i1

28

Thus, the initial solution is a*  5 0 0 0T with the best current value

4 M cii a 1.9 . Next, the test to check whether the branch is worthy to be i1 explored or not is in order. The equation (2.4.9) is used. Begin with k  3 and

reduce it until k 1 with ak  0 is found, which is a1 in this case . Then,

change a1  5 to a1  4 . The test is as following

0.63 1.52 10  8  1.94 3

The test result yields a larger value than M . Thus, the branch may be worth exploring. The next branch can be found by

a2 (10  (2)(4)) / 3  0

a3 (10  (2)(4)) / 5  0

a4 (10  (2)(4)) / 8  0

This branch yields a*  4000T which does not improve our current

solution. Now, reduce a1  4 to a1  3 and test the branch.

0.63 1.14 10  6  1.98 3

The test result yields a larger value than current . Thus, the branch may be worth exploring. The next branch can be found by

a2 (10  (2)(3)) / 3  1

a3 (10  (2)(3)  (3)(1)) / 5  0

29

a4 (10  (2)(3)  (3)(1)) / 8  0

The value of this solution cannot improve our current solution. To explore

further, we begin with k  3 and reduce it until k  2 with ak  0 is found.

Then, we reduce a2 by one. Next, the test to find out whether the path

aa123, 0 is worth exploring further,

0.95 1.14 10  6  1.9 5 which is not larger than the current solution. Thus, this branch is not worth exploring further. Hence, we reduce k further until the next k with is

found. a1 is found and is reduced by one. Testing with a1  2 branch shows that

0.63 0.76 10  4  2.02 3 which is larger than the current solution. Therefore, this branch may be worth exploring. This branch can be computed by

a2 (10  (2)(2)) / 3  2

a3 (10  (2)(2)  (3)(2)) / 5  0

a4 (10  (2)(2)  (3)(2)) / 8  0

The value of this branch is 1.06 which is larger than the current best solution.

Thus, new best solution are a*  2 2 0 0T and M  2.02 . This algorithm will continue until all of the worthy branches are explored. The optimal

30

* T solution for raw number 1 of problem is a1  2 2 0 0 with M  2.02 .

* T The solution for raw number 2 is a2  0 2 0 0 with M 1.27 . Lastly,

* T the optimal solution for raw number 3 is a3  2000 with M  0.76 .

Then, compare the value of each optimal solution with its corresponding cost

to find the desired entering column which has the most negative reduced cost

coefficient (RCC).

q 1: 2.02 1.9 0.12

q  2 : 1.27 1 0.27

q  3: 0.76 0.6 0.16

* T Hence, the entering column is a2  0 2 0 0 .

* T  Step 3: Solving the linear problem Bd a2 , we obtain d  0 0 0.67 0 .

 Step 4: Compare the ratios between x and d but since there is only one value

116.67 in d that is not zero. Then, let t be the lowest ratios. Thus,t 175 0.67

and the leaving column is the third column. The new B , x* , c are

1 0 0 5 150   150  1.9 0 0 2 0 100   100  1.9 B   , x*     , c   0 2 0 0 t  175  1      1 0 0 0 50   50  1.9

TT1 T  Step 5: Solve y c B . After obtaining y  0.38 0.5 0.95 1.52 , test

whether it can improve the solution or not by computing

T * ya1 1.9   0.14 .Thus, this pattern cannot improve the solution any further

31

* T T * and test a3  2000 next. We obtains ya3 0.6 0.16 . Hence, the

* pattern a3 can improve the current solution so will become an entry

* T column. Solve Bd a3 and compare the ratio. We obtain d  0 0 0 0.4

and ratio  inf inf inf 125, so t 125 . Thus, the new B , x* , c are as

following

1 0 0 2 150   150  1.9 0 0 2 0 100   100  1.9 B  , x*     , c   0 2 0 0 175   175  1      1000 t  125  0.6

The second iteration begins

TT1 T  Step 1: Solve y c B . We obtain y  0.3 0.5 0.95 1.6 .

 Step 2: Solve the knapsack subproblem by using branch and bound algorithm.

(3.1.13) max0.3a1 0.5 a 2  0.95 a 3  1.6 a 4

Subject to

2a1 3 a 2  5 a 3  8 a 4  lq for q 1,2,3

The optimal solution for each raw is as following

* T q 1: a1  1 0 0 1 with M1 1.9. RCC 1.9 1.9 0

* T q  2 : a2  0 2 0 0 with M2 1. RCC 1 1 0

* T q  3: a3  2000 with M 3  1. RCC 0.6 0.6 0

These mean the optimal solution for this problem has already been found.

The optimal solution for this example is

32

1 0 0 2 150 1.9 0 0 2 0 100 1.9 B  , x*   and c   . 0 2 0 0 175 1   1000 125 0.6

As you can see above, the fifth step helps reduce the work by using the already found patterns which are tested and used as entering columns.

3.2 Cutting Stock with Limited Number of Rolls of Raw Material In real application, everything is limited. Now, the each type of raw material has a limited quantity. Thus, there will be p more constraints added to the problem.

Let

 vk  number of raw k that can be used, for kp1,2, , .

a11 a 1n a 1( n 1) a 1( n n ) aa1(n n   n  1) 1( n  n   n  n ) 1| 1 1 2 | | 1 2p 1 1 2 p 1 p  | | | a a a a a a m1 mn1 mn (1)() 1 mnn 1  2 mnnn ( 1  2  p 1  1)( mnnnn 1  2   p 1  p ) 1 1 1 | 0 0 0 | | 0 0 0  A   0 0 0 | 1 1 1 | |000 0 0 0 ||| 0 0 0  0 0 0 | 0 0 0 | | 1 1 1

= a ()()m p  n1  n 2   npp 1  n matrix which the first upper half

consists of patterns used for each raw material and the lower half

consists of the new constraint which its coefficient is 1.

 H (,,,,,,,) b1 b 2 bmp v 1 v 2 v = new right hand side which consist of two

column vectors.

33

 B = basic matrix for this cutting stock problem

The new constraints can be formulated as follows

AX H (3.2.1)

Now, the number of basic solution increases from m to mN basic variables.

This make the old not symmetric. Thus, B1 is impossible to be solve because B size is m p m so it is not symmetric. Therefore, p more column needs to be

0 0 0 0   0 0 0 0  added to B to make it symmetric again. Matrix 1 0 0 0 , which consists of 0 1 0 0    0 0 0 0 1 slack variables, will be added to to make it invertible. Now, yT will be divided into

TTT two sections, y(,,,| y1 y 2 ym y m 1 , y m  2 ,, y m  p ) (y|y) M P .- Next complication is to find the reduced cost coefficient. The reduced cost coefficient will be changed to

T ck c k y M a k  y m k  c k  z k  y m k . Thus the knapsack sub-problem will be changed as follows.

For each raw kth , kp1, ,

max  zk y m k c k  (3.2.2)

Subject to

m (3.2.3)  wWia jk k i1

34

Then, this sub-problem can be solve normally by using branch and bound algorithm.

3.2.1 Example

To explain how to solve this complication further, let us consider example from section 3.2.4 with added the quantity for each raw.

 v  360 360 360T = amount of raw left in the inventory.

This cutting stock-problem can be formulated as following,

n1 n 1 n 2 n1 n 2 n 3 (3.2.4) min (1.9*xj )  x j  (0.6* x j ) j1 j  n1  1 j  n 1  n 2  1

Subject to

n1 n 2 n 3 (3.2.5)  aij x j  b i for im1,2, , j1

nq (3.2.6)  xvjq for q 1,2,3 j1

(3.2.7) x j  0 for j1,2, , n1  n 2  n 3

The initial solution is

0 0 3 0 0 0 0 350  1 0 0 0 0 0 0 200 0 1 0 0 0 0 0 133.33  B  0 0 0 1 0 0 0 , x  150 with x5,, x 6 x 7 are slack variables, 0 0 0 1 1 0 0 210  0 1 1 0 0 1 0 26.67  1 0 0 0 0 0 1 10

and

35

T c [0.6 1 1 1.9 0 0 0] .

Let the first iteration begin.

TT1 T  Step 1: Solve y c B . We obtain y [0.33 0.6 1 1.9 0 0 0] .

 Step 2: Solve the knapsack problem by using branch and bound algorithm.

th The subproblem for each q raw is as following

(3.2.8) max0.33a1 0.6 a 2  a 3  1.9 a 4

Subject to

(3.2.9) 2a1 3 a 2  5 a 3  8 a 4  lq for q 1,2,3

The optimal solution for each raw is as following

* T q 1: a1  1 0 0 1 with M1  2.23. RCC 2.23  1.9  0  0.33

* T q  2 : a2  0 2 0 0 with M 2  1.2 . RCC 1.2  1  0  0.2

* T q  3: a3  2000 with M3  0.66. RCC 0.66  0.6  0.06

The entering column is from the first raw so the column vector 1 0 0T must be added to the pattern. Thus, the new entering column is

* T a1  1 0 0 1 1 0 0 .

*  Step 3: Find a column vector d by solving the linear problem Bd a1 , we

obtain d 0 0 0.33 1 0 0.33 0T .

 Step 4: Find the leaving column by comparing ratio between x and d . The

ratios are

36

 400 150   T . Thus, t 150 and the leaving column is the

* forth column. Replace the leaving column with the entering column a1 . Now,

we have

0 0 3 1 0 0 0 350   350  0.6      1 0 0 0 0 0 0 200   200  1 0 1 0 0 0 0 0 133.33 0.33(t )   83.33  1 *      B  0 0 0 1 0 0 0 , x t  150  , c  1.9 0 0 0 1 1 0 0 210   210  0      0 1 1 0 0 1 0 26.67 0.33(t )   76.67  0      1 0 0 0 0 0 1 10   10  0

TT1 T  Step 5: Solving y c B , we obtain y  0.33 0.6 1 1.57 0 0 0 .

* * Then, add the vector column for both a2 and a3 . Test

* T a2  0 2 0 0 0 1 0 whether it can improve the solution or not by

T * computing y a22 c 1.2  1  0.2 . Thus, this pattern can improve the current

* solution and will be the next entering column for B . Then, solve Bd a2 and

compare the ratio. We obtain d 2 0 0 0 0 1 2T and

T ratio 175     76.67  , so t  76.67. Thus, the new B , x* , c

are as following

37

0 0 3 1 0 0 0 350 2(76.67)   196.69  0.6      1 0 0 0 0 2 0 200   200  1 0 1 0 0 0 0 0 83.33   83.33  1 *      B  0 0 0 1 0 0 0 , x 150   150  , c  1.9 0 0 0 1 1 0 0 210   210  0      0 1 1 0 0 1 0 76.67   76.67  1      1 0 0 0 0 0 1 10 2(76.67)   163.33  0

* T T * Test a3  2 0 0 0 0 0 1 next. Compute y a33 c 0.8  0.6  0.2 .

* * Thus, a3 can also improve the current solution further. Solve Bd a3 and

compare the ratio. We obtain d 1.33 0 0.67 0 0  0.67  0.33T and ratio =

147.5 125    T .

Hence, the updated B , x* , c are as following

0 0 2 1 0 0 0 196.69 1.33(125)   30  0.6      1 0 0 0 0 2 0 200   200  1 0 1 0 0 0 0 0 125   125  0.6 *      B  0 0 0 1 0 0 0 , x 150   150  , c  1.9 . 0 0 0 1 1 0 0 210   210  0      0 1 1 0 0 1 0 76.67 (0.67)125   160  1      1 0 0 0 0 0 1 163.33 (0.33)125   205  0

The first iteration ends here.

The second iteration begins.

TT1 T  Step 1: Solve y c B . We obtain y 0.3 0.6 1.2 1.6 0 0.2 0 .

th  Step 2: Solve the knapsack problem for each q raw which can be formulated

as following

38

(3.2.10) max0.3a1 0.6 a 2  1.2 a 3  1.6 a 4

Subject to

(3.2.11) 2a1 3 a 2  5 a 3  8 a 4  lq for q 1,2,3

The optimal solution for each raw is as following

* T q 1: a1  0 0 2 0 with M1  2.4 . RCC 2.4  1.9  0  0.5

* T q  2 : a2  0 0 1 0 with M 2  1.2 . RCC 1.2  1  0.2  0

* T q  3: a3  0 1 0 0 with M 3  0.6. RCC 0.6 0.6 0

The entering column is from the first raw so the column vector 1 0 0T

must be added to the pattern. Thus, the new entering column is

* T a1  0 0 2 0 1 0 0 .

* T  Step 3: Solve Bd a1 , we obtain d 4 2 0 0 1  2  4 .

 Step 4: Compute the ratio between x and d . The ratios are

7.5 100  210  T . Thus, t  7.5 and the leaving column is the

* first column. Replace the leaving column with the entering column a1 . Now,

we have

0 0 2 1 0 0 0 t  7.5  1.9      0 0 0 0 0 2 0 200 2(t )   185  1 2 1 0 0 0 0 0 125   125  0.6 *      B  0 0 0 1 0 0 0 , x 150   150  , c  1.9 1 0 0 1 1 0 0 210 t   202.5  0      0 1 0 0 0 1 0 160 2(t )   175  1      0 0 1 0 0 0 1 205 4(t )   235  0

39

TT1 T  Step 5: Solve y c B . We obtain y  0.3 0.475 0.95 1.6 0 0.05 0 .

* T Test a2  0 0 1 0 0 1 0 whether it can improve the current solution

T * or not. Compute y a22 c 0.95  0.05  1  0 . Thus, this pattern cannot

* T improve the current solution. Move on testing a3  0 1 0 0 0 0 1 .

T * Let us compute y a33 c 0.475  0.6   0.125 . Thus. Neither of the remaining

patterns can improve the current solution any further. This ends the second

iteration.

The third iteration begins.

 Step 1: Compute .

We obtain .

th  Step 2: Solve the knapsack problem for each q raw which can be formulated

as following

(3.2.12) max0.3a1 0.475 a 2  0.95 a 3  1.6 a 4

Subject to

(3.2.13) 2a1 3 a 2  5 a 3  8 a 4  lq for q 1,2,3

The optimal solution for each raw is as following

* T q 1: a1  1 0 0 1 with M1 1.9. RCC 1.9  1.9  0  0

* T q  2 : a2  0 0 1 0 with M 2  1.2 . RCC 0.95  0.05  1  0

* T q  3: a3  2000 with M 3  0.6. RCC 0.6 0.6 0

40

Thus, we can conclude that the optimal solution for this cutting stock

problem is

0 0 2 1 0 0 0 7.5 1.9  0 0 0 0 0 2 0 185 1 2 1 0 0 0 0 0 125 0.6 *  B  0 0 0 1 0 0 0 , x  150 , c  1.9 1 0 0 1 1 0 0 202.5 0  0 1 0 0 0 1 0 175 1  0 0 1 0 0 0 1 235 0

3.3 Cutting Stock Problem with Limited Number of available Cutting Knives In this case, one more additional knapsack constraint will be added to each column generation sub-problem. Let h be the number of available cutting knives.

Then, the new column generation sub-problems will be

m (3.3.1) max  yajj j1

Subject to

m (3.3.2)  wj a jk W k for kp1,2, , j1

m (3.3.3) ahjk  for kp1,2, , j1

Two of the possible approaches of solving this problem will be compared. The first one is to modify branch and bound algorithm. To do this, the branch and bound algorithm will be limited to update the feasible solution if and only if it satisfies constraint (3.3.4). The second method is to use Lagrangian Relaxation to relax the additional constraints (3.5.3) and use the subgradient method to find an optimal

41 solution. The Lagrangian Relaxation technique will be reviewed in the next sub- section.

3.3.1 Lagrangian Relaxation Review Lagrangian Relaxation technique is good for problems which the constraints can be divided into good and bad sets. The good constraints are constraints that can be satisfied easily and make the problem easy to be solved. The bad constraints make the problem very hard to be solved. The main idea of this technique is to relax the problem by removing the bad set of constraints and moving them into the objective function. Each of the bad constraints is assigned with weights which are called the

Lagrangian multiplier. The each Lagrangian multiplier is used as a penalty for a solution that does not satisfy the constraint of the Lagrangian multiplier. The

Lagrangian Relaxation is needed when this problem gets more complicated. This method is used with the limited knives constraint. Let K  number of knives available for each raw. Now, consider this cutting stock problem with limited knives constraint

n (3.3.4) min x j j 1

Subject to

(3.3.5) min

n (3.3.6)  aij x j b i for i 1,2, ,m j 1

m  (3.3.7) xKj i 1

42

x j  0 for jn1,2, ,

(2.3.7) are set of good constraints and (2.3.8) are set of bad constraints. Now, relax the problem by removing (2.3.8).

T (3.3.8) ZD = minc x ( d Bx )

Subject to

Ax b

 is a Lagrangian multiplier. To solve this, the subgradient optimization method is used.

The Subgradient optimization Method is as following

1. Firstly, Pick a starting point 0 .

2. Pick a subgradient stt b Ax , if st  0 , terminate the iteration because

the optimal solution has been found.

t t t t t 3. Compute max{0,   s }, where  is the stepsize.

4. tt1 and go to 2.

The stepsize  definition is critical for the speed of the convergence of the algorithm.

In this thesis, the stepsize method by Held and Karp is used.

* t ttZZ () (3.3.9)   u 2 b Axt where Z* is the value of the best solution for the original which is found so far, ut is

0 tt1 a constant which 02u and uu if ZD has not increased in the last T iterations otherwise uutt1  with 01 and T 1.

43

The relaxed sub-problem for the Lagrangian Relaxation method can be formulated as follows.

mm (3.3.10) max yj a j k h a jk jj11

Subject to

m  wj a jk W k for kp1,2, , j1

m * t ttZZ () The subgradient is stt h a and the step-size is   u k . For k jk m 2 j1 Wk  w i a jk i1

 Z* = the approximated optimal value of solution for the original solution.

 ut = decreasing adaption parameter

And the branch and bounded algorithm can be modified as following

k ˆ b  wii x k i1 ˆ  If hx i wk1 i1

nkc  ˆˆk1 o Use cxi b  a i x i  M , which M is the best solution found ii11ak1 

so far, as the pruning criterion.

k ˆ b  wii x k i1 ˆ  If hx i wk1 i1

o Look at the best of the remaining k1  l  n

44

k (3.3.11) ˆ b  aii x k ˆ i1 larg max cli min , h xˆ k1  l  n  al i1  

The Fathoming criterion in this case is

k (3.3.12) ˆ kkb  aii x c xˆˆ cmini1 , h  x  M i ilˆ i ii11al  

3.3.1 Example

Let us show the example for modified branch and bound algorithm. Let us consider this knapsack problem.

(3.3.13) max 2x1 5.5 x 2  3 x 3  8 x 4  x 5

Subject to

(3.3.14) x13 x 2  2 x 3  6 x 4  x 5  9 (3.3.15) x1 x 2  x 3  x 4  x 5  4

For this example, we initialize by

9 x1 min( ,4) 4 1

94 x2 min( ,0) 0 3

45

94 x3 min( ,0) 0 2

94 x4 min( ,0) 0 6

94 x5 min( ,0) 0 1

The initial best solution is

***** x14, x 2  0, x 3  0, x 4  0, x 5  0 and M 2*4 8

Reduce k from 5 until we found xk  0 which we found x1  4 . Change x1  3 and test the worthiness of the branch using (3.3.12).

6 6 6 6 6 max(5.5*min( ,1),3*min( ,1),8*min( ,1),1*min( ,1))  14  M 3 2 6 1 which is larger than M . Thus, this branch is worth exploring. Now, computing

93 x2 min( ,4  3)  1 3

x3  0

x4  0

x5  0

***** This solution improves the last one. Thus, x13, x 2  1, x 3  0, x 4  0, x 5  0 and

M 11.5

Reduce from 5 until we found which we found x2 1. Change x2  0 and, again, test the branch using (3.3.12).

46

6 6 6 6 max(3*min( ,1),8*min( ,1),1*min( ,1)  14  M 2 6 1 which is still larger than the current best solution. Next, computing

6 x3 min( ,4  3)  1 2

x4  0

x5  0

This solution yields 9 which is smaller than M . Thus, no need to replace .

Reduce k from 5 until we found xk  0 which we found x3 1. Change x3  0 and test the branch using (3.3.12).

66 6 max(8*min( ,1),1*min( ,1)) 14 61 which is larger than M . Thus, this branch is worth exploring. Now, computing

6 x4 min( ,4  3)  1 6

The value for this solution is 2*3 0  0  8*1  0  14 which is higher than

***** previous best value. Thus M 14 and x13, x 2  0, x 3  0, x 4  1, x 5  0

Now, reduce from 5 until we found which we found x4 1. Change

x4  0 and test the branch using (3.3.12).

6 6 max(1*min( ,1))  7  M 1

This branch is not worth exploring so backtrack to the next branch

47

Now, reduce k from 4 until we found xk  0 which we found x1  3. Change

x1  2 and test the branch using (3.3.12).

7   7   7  7 4 max(5.5*min( ,2),3*min( ,2),8*min( ,2),1*min( ,2))  15  M 3   2   6  1 which is larger than the current best solution. Next, computing

7 x2 min( ,4  2)  2 3

x3  0

x4  0

x5  0

The value for this solution is 2*2 5.5*2  0  0  0  15 which is higher than

***** previous best value. Thus M 15 and x12, x 2  2, x 3  0, x 4  0, x 5  0

Now, reduce from 5 until we found which we found x2  2 . Change

x2 1 and test the branch using (3.3.15).

4   4  4 9.5 max(3*min( ,1),8*min( ,1),1*min( ,1))  12.5  M 2   6  1

This path cannot improve the solution further. Thus, move on to the next

branch. Reduce from 5 until we found which we found x2 =1. Change x2 1

to x2  0 and test the branch using (3.3.12).

7   7  7 4 max(3*min( ,2),8*min( ,2),1*min( ,2))  12  M 2   6  1

48

Again, this branch cannot improve the solution either. Move on to the next branch. Reduce $k$ from 5 until we find $x_k \neq 0$; here we find $x_1 = 2$. Change $x_1$ to 1 and test the branch using (3.3.12).

8   8   8  8 2 max(5.5*min( ,3),3*min( ,3),8*min( ,3),1*min( ,3))  13  M 3   2   6  1

This branch also cannot improve the solution, so move on to the next branch. Reduce $k$ from 5 until we find $x_k \neq 0$; here we find $x_1 = 1$. Change $x_1$ to 0 and test the branch using (3.3.12).

9   9   9  9 0 max(5.5*min( ,4),3*min( ,4),8*min( ,4),1*min( ,4))  16.5  M 3   2   6  1

This path may improve the solution. Thus, compute

9 x2 min( ,4  1)  3 3

x3  0

x4  0

x5  0

The value for this solution is $0 + 5.5 \times 3 + 0 + 0 + 0 = 16.5$, which is higher than the previous best value. Thus $M = 16.5$ and $x_1^{*} = 0,\; x_2^{*} = 3,\; x_3^{*} = 0,\; x_4^{*} = 0,\; x_5^{*} = 0$.

Now, reduce $k$ from 5 until we find $x_k \neq 0$; here we find $x_2 = 3$. Change $x_2$ to 2 and test the branch using (3.3.12).

3   3  3 11 max(3*min( ,2),8*min( ,2),1*min( ,2))  14  M 2   6  1


This branch cannot improve the solution. Move on to the next branch.

Now, reduce $k$ from 5 until we find $x_k \neq 0$; here we find $x_2 = 2$. Change $x_2$ to 1 and test the branch using (3.3.12).

6   6  6 5.5 max(3*min( ,3),8*min( ,3),1*min( ,3))  14.5  M 2   6  1

This branch cannot improve the solution either. Therefore, move on to the next branch.

Reduce $k$ from 5 until we find $x_k \neq 0$; here we find $x_2 = 1$. Change $x_2$ to 0 and test the branch using (3.3.12).

9   9  9 0 max(3*min( ,4),8*min( ,4),1*min( ,4))  12  M 2   6  1

This branch cannot improve the solution further.

Thus, the optimal solution for this problem is

***** x10, x 2  3, x 3  0, x 4  0, x 5  0 with M 16.5. The modified branch and bound can be illustrated as below.


Figure 3: Search tree of the modified branch and bound for this example.
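As a quick sanity check of this walkthrough (my addition, not part of the thesis), a brute-force enumeration over all vectors with at most 4 pieces confirms the optimum:

    from itertools import product

    c, a, b, h = [2, 5.5, 3, 8, 1], [1, 3, 2, 6, 1], 9, 4
    feasible = (x for x in product(range(h + 1), repeat=5)
                if sum(ai * xi for ai, xi in zip(a, x)) <= b and sum(x) <= h)
    best = max(feasible, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
    print(best, sum(ci * xi for ci, xi in zip(c, best)))  # (0, 3, 0, 0, 0) 16.5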


Chapter 4 Computational Results

4.1 Cutting Stock with Multiple Raw Material

In this section and the ones that follow, randomly generated problems were used (5). Let $m$ be the number of orders, $M$ be the number of raw material types, $w_i$ be the required length of the $i$th order, $b_i$ be the required quantity of the $i$th order, $W_j$ be the length of the $j$th raw material, $B_j$ be the available quantity of the $j$th raw material, and $c_j$ be the cost of each roll of the $j$th raw material, for $i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, M$. Each problem class is characterized by the following conditions: $\underline{w} \le w_i \le \overline{w}$ and $\underline{b} \le b_i \le \overline{b}$ for $1 \le i \le m$, and $\underline{W} \le W_j \le \overline{W}$, $\underline{B} \le B_j \le \overline{B}$, and $\underline{c} \le c_j \le \overline{c}$ for $1 \le j \le M$. The bounds on the required lengths are determined by $\underline{w} = v_1 W$ and $\overline{w} = v_2 W$ for some $v_1$ and $v_2$. The variables $w_i$, $W_j$, $b_i$, and $B_j$ are drawn uniformly at random.
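A short Python sketch of one such generator is given below; the structure ($\underline{w} = v_1 W$, $\overline{w} = v_2 W$, uniform draws) follows the description above, while the concrete numeric ranges for $W_j$, $B_j$, $b_i$, and $c_j$ are assumptions of mine for illustration only.

    import random

    def generate_instance(m, M, v1, v2, seed=None):
        rng = random.Random(seed)
        W = [rng.randint(1000, 10000) for _ in range(M)]   # raw lengths (assumed range)
        B = [rng.randint(50, 200) for _ in range(M)]       # raw quantities (assumed range)
        c = [rng.uniform(1.0, 10.0) for _ in range(M)]     # roll costs (assumed range)
        W_max = max(W)
        # Order lengths are drawn uniformly from [v1 * W, v2 * W].
        w = [rng.randint(int(v1 * W_max), int(v2 * W_max)) for _ in range(m)]
        b = [rng.randint(1, 100) for _ in range(m)]        # order quantities (assumed range)
        return w, b, W, B, c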

 Results for m = 40, M = 3 (running times in seconds; for each $v_1$, the two rows are the two methods compared in the text)

v1\v2    0.2      0.3     0.4     0.5     0.6    0.7    0.8    0.9
0.001    14.4     36.1    42.6    32.6    17     5.6    1.3    0.5
         13.3     32.5    49.4    34.4    18     5.5    1.2    0.5
0.01     15.8     38.3    59.7    48      10.4   3.4    0.9    0.6
         14.5     38.2    61.8    47      9.6    3.1    0.8    0.5
0.05     20.5     35.7    36.6    16.5    6.4    3.8    1.4    0.5
         20.1     35.4    38.4    16.1    6.3    3.5    1.3    0.5
0.15     1645     65      30.6    7.5     3.3    1.5    0.4    0.2
         1562     65      29.5    7.3     3.1    1.4    0.3    0.2

Table 4


 Results for m = 40, M = 5

v1\v2    0.2      0.3     0.4     0.5     0.6    0.7    0.8    0.9
0.001    25.2     45.2    70.7    29.3    16.4   7.1    3.9    1.1
         23.3     48.9    66.4    26.1    14.4   6.7    3      0.7
0.01     26.8     60.6    100.1   56.7    22.2   5.2    2.1    1.2
         27.1     57.1    103.4   54.3    22.2   4.6    1.7    1
0.05     33.2     52.3    72.3    42.3    6      7.7    1.8    1.2
         31.4     48      67.8    40.1    6.5    7.7    1.1    0.9
0.15     3022.4   80.9    29.2    11      3.3    3.3    1.8    0.7
         2676.3   69      24.9    9.7     3.5    3.1    1.5    0.6

Table 5

 Results for m  60, M  3 v1\v2 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.001 37.6 80.6 99 114.4 61.8 24.6 7.2 2.4 34.5 86 98.4 121.2 71.1 26.4 6.4 2.3 0.01 40.3 81.7 155.9 164.5 77.4 26.9 6.1 2.6 39.9 89.1 164.7 134 76.2 25.8 4.9 1.7 0.05 51.8 87.8 194.9 164.5 53.7 13.5 5.8 2.7 45.9 86.1 199.5 168.4 50.3 13.5 4.9 2.6 0.15 3022.4 257.1 179.1 29.4 12 5.2 4.1 3.4 2676.3 229.7 170.4 27.1 9.7 5.4 3.7 2.7 Table 6

 Results for m = 60, M = 5

v1\v2    0.2      0.3     0.4     0.5     0.6    0.7    0.8    0.9
0.001    94.9     109.1   154.5   157.1   95     34.6   21.6   5.5
         86.8     101.4   169.9   187.5   86.2   37.1   20.1   4.8
0.01     67.2     117.9   229.9   184.8   40.4   23.7   10.2   11.7
         56.8     111.7   250.3   166.1   41.2   15.9   10.1   11.7
0.05     79.6     213     160.5   134.5   69.9   15.4   9.6    5.4
         68.3     187.7   161.3   136.6   69     13.6   8.1    4
0.15     3235.9   224.4   196.6   59.6    32.5   7.7    4.4    5.1
         3016     212.3   190.3   55.9    29.1   6.3    3.3    3.1

Table 7


 Results for m 100 , M  3 v1\v2 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.001 126.9 168.1 88.9 98.2 97.9 40.6 63.8 29 150 164.1 92.4 95.6 95.7 55.2 61.1 24.4 0.01 129.2 212.5 187.1 128.9 69.6 45.5 62.2 15.1 119.4 194.9 207.6 126.6 65.9 42.5 58.8 11.2 0.05 159.9 302.7 286 208.8 72.5 76.5 28.6 7.2 151.3 332.9 284 206.8 75 69.1 32.8 6.3 0.15 - 848.2 485.1 204.6 282.8 44.1 15.8 16.9 - 764.2 499.1 187.6 275.8 42.3 15.6 16.6 Table 8

 Results for m = 100, M = 5

v1\v2    0.2     0.3     0.4     0.5     0.6      0.7     0.8    0.9
0.001    200.3   458     172.3   116.9   68.8     82      73.3   30.3
         198.1   456.8   174     87.5    63.1     86.1    59.1   27.8
0.01     249.7   207.4   191.7   107.5   93.9     81.8    76.1   37.8
         222.6   224.7   195.8   116.6   98.8     90.9    66.1   31.9
0.05     265.1   353.9   528.3   171.3   113.43   133.1   33.1   20.6
         269.1   416     513.2   167.6   103.5    143.9   28.6   11.7
0.15     -       872     698.1   330.7   148.2    109.4   41.9   32.8
         -       783     672     319.6   146      78.8    29.1   23.9

Table 9

From the results, the running time increases as the ratio between $v_1$ and $v_2$ increases. The implemented method has a slightly better running time than the regular method.


4.2 Cutting Stock with Multiple Limited Number of Rolls of Raw Material

The following results show the running times (in seconds) for this variant.

 Results for m = 40, M = 3

v1\v2    0.2      0.3     0.4     0.5     0.6    0.7    0.8    0.9
0.001    17.3     31.3    59.8    48.2    13.5   2.2    2.3    0.4
0.01     16       37.4    59.1    25.7    19     4.4    1.8    2.6
0.05     26.4     38.9    57.4    25.9    8.9    3      1.7    0.8
0.15     1416.8   44.3    29.9    9.3     2.3    1.3    0.6    0.8

Table 10

 Results for m = 40, M = 5

v1\v2    0.2      0.3     0.4     0.5     0.6    0.7    0.8    0.9
0.001    24.9     100.4   79.8    55.9    8.8    5.6    1.3    0.8
0.01     29.3     70.7    110.3   51.1    20.2   4.3    2.7    1.7
0.05     34.1     63.4    74.7    33.9    8.1    2.8    1.7    1.1
0.15     1209.9   76.7    46.1    12.7    4.7    2.1    1.2    0.8

Table 11

 Results for m  60, M  3 v1\v2 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.001 23.3 76.2 118.5 138.7 63.9 31.1 11.8 2 0.01 38.4 74.3 128.2 130.5 68.4 21.6 13.3 9.5 0.05 69.8 74.5 156.3 84.9 55.3 28.9 5.9 12.9 0.15 57353 113.7 96 46.8 25.6 17.5 10.6 8.6 Table 12

 Results for m  60, M  5 v1\v2 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0.001 54.1 139.6 200.6 231.9 68.7 64.6 8 4.2 0.01 81 171.1 276.4 218.2 91.1 15.3 6 3.4 0.05 113.2 173.2 314.5 129.2 47.6 15.4 6 7 0.15 57716 518.8 195.7 109.8 21.8 10.2 7 6 Table 13


Figure 4: Running time (in seconds) versus the ratio between $v_1$ and $v_2$ for each data set ($m = 40, M = 3$; $m = 40, M = 5$; $m = 60, M = 3$; $m = 60, M = 5$).

The running time increases drastically at larger ratios as the number of orders grows. The running time for ratios near 1 is highest because it is much harder to find an optimal pattern when there are many similar orders that need to be fulfilled. Figure 4 compares the running time against the ratio for each data set.

4.3 Cutting Stock with Limited Number of Available Cutting Knives

In this scenario, a comparison between the Lagrangian relaxation method and the modified branch and bound is made. In the tables below, the upper results are from the modified branch and bound and the lower results are from the Lagrangian relaxation method. Again, the data sets for testing are randomly generated by the same algorithm as in Section 4.1. Two knife limits, 4 and 10, are tested.


For m = 40, M = 5 and knife = 4

 The running time results (in seconds)

v1\v2    0.2     0.3     0.4     0.5     0.6     0.7     0.8     0.9
0.001    0.04    0.04    0.04    0.05    0.03    0.09    0.2     0.3
         0.05    0.06    0.06    0.16    0.1     0.2     0.7     0.08
0.01     0.04    0.04    0.04    0.04    0.04    0.1     0.4     0.4
         0.5     0.05    0.09    0.09    0.2     0.1     0.07    0.08
0.05     0.06    0.06    0.06    0.05    0.11    0.14    0.5     0.42
         3.2     0.2     0.24    0.13    0.14    0.14    0.2     0.32
0.15     -       22      3.26    2.7     2.3     1.2     1.45    0.68
         -       9.7     1.37    0.9     0.4     0.6     0.29    0.35

Table 14

 The cost difference results ($\frac{c_2 - c_1}{c_2} \times 100$, in percent)

v1\v2    0.2     0.3     0.4     0.5     0.6    0.7    0.8    0.9
0.001    0.7     0.18    0.13    0.69    0.40   1.97   3.79   2.08
0.01     2.32    1.94    0.3     0.2     0.77   1.74   2.78   3.88
0.05     11.92   6.43    5       2.82    3.03   4.42   6.49   7.14
0.15     -       17.3    10.82   10.4    9.57   8.75   9.16   10.32

Table 15

Figure 5: Running time (in seconds) versus the ratio between $v_1$ and $v_2$ for the modified tree (branch and bound) and Lagrangian methods.

Figure 6: Cost difference (%) versus the ratio between $v_1$ and $v_2$.

 The node count results

v1\v2    0.2       0.3       0.4      0.5      0.6      0.7      0.8      0.9
0.001    1131      1209      1053     2383     955      6317     17291    28670
         5562      5590      6782     21261    12428    29203    101850   11057
0.01     1014      1111      1443     1237     1193     8843     44084    39808
         69490     5231      11296    10859    30707    14423    7491     8978
0.05     2480      2614      2644     2169     8579     10921    5147     42491
         4957      30304     33324    16906    18030    18903    27793    43619
0.15     -         2435106   348860   291689   255750   126790   156620   71188
         -         1411810   201562   128360   62290    86420    42845    51009

Table 16


For m = 40, M = 5 and knife = 10

 The running time results (in seconds)

v1\v2    0.2     0.3     0.4     0.5     0.6     0.7    0.8     0.9
0.001    0.04    0.05    0.04    0.17    1       1      0.7     0.7
         0.3     0.2     0.06    0.1     0.07    0.15   0.2     0.06
0.01     0.04    0.04    0.12    3       3       0.3    2.9     0.8
         0.4     0.14    0.06    0.25    0.25    0.8    0.1     1.7
0.05     4.2     13.4    2.6     8.5     4.6     2.1    1.6     1.3
         100.9   13.5    17.3    0.9     0.34    0.3    0.24    1.2
0.15     -       208     38.7    12.4    5.2     2.4    1.5     1
         -       5       1.6     0.5     0.3     0.3    0.17    0.16

Table 17

 The cost difference results (in percent)

v1\v2    0.2     0.3     0.4        0.5     0.6     0.7     0.8     0.9
0.001    0.048   0.59    0.11422    0.34    7.33    5.034   5.4     4.8
0.01     0.47    0.62    0.881646   2.9     2.92    1.08    5.99    5.28
0.05     10.11   12.11   5.386819   6.24    5.21    11.73   16.44   10.01
0.15     -       6.98    14.00659   22.35   15.63   10.11   10.11   10.79

Table 18


Figure 7: Running time (in seconds) versus the ratio between $v_1$ and $v_2$ for the modified tree (branch and bound) and Lagrangian methods.

Figure 8: Cost difference (%) versus the ratio between $v_1$ and $v_2$.


 The node count results

v1\v2    0.2        0.3        0.4       0.5       0.6      0.7      0.8      0.9
0.001    2788       4166       3387      17749     98958    108230   72148    75493
         42838      32290      7843      12276     8532     19657    31952    8217
0.01     2730       3248       11530     331400    331400   34455    244160   80799
         53775      19292      7062      36104     36104    120530   8736     236860
0.05     47584      1423200    253340    917920    471860   189330   163170   118910
         12978000   1828100    2060200   136250    43693    40320    32050    137590
0.15     -          12207000   4343700   1375700   578300   270900   165560   111260
         -          889939     203770    78263     46824    43656    25404    23336

Table 19

The Lagrangian relaxation method is slower than the modified branch and bound method when the knife limit is low, but the Lagrangian running time is better when the knife limit is high. The Lagrangian method is better in that case because the branch and bound has to cycle through many more nodes than the Lagrangian method. Still, the Lagrangian method cannot give a globally optimal solution for the problem except when the ratio between $v_1$ and $v_2$ is small.


Chapter 5 Conclusions

From the testing results in the previous chapter, we can conclude that the branch and bound algorithm can solve the knapsack sub-problems of this cutting stock problem more efficiently than dynamic programming. The running time using branch and bound is less than that of dynamic programming in every case. Dynamic programming needs to fill an elaborate value table completely to find an optimal solution, whereas the branch and bound method employs a very effective pruning scheme to eliminate a large number of inferior nodes. The branch and bound algorithm can fathom the part of the columns that cannot improve the current solution.

Because of this confirmation, we employ the branch and bound approach to solve all the knapsack sub-problems generated by the column generation technique in the remainder of the thesis.
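To illustrate how these pieces fit together, here is a hedged Python sketch of a column generation loop for a basic single-raw, knife-limited cutting stock LP, with the modified branch and bound (the modified_bb sketch of Section 3.3) as the pricing subroutine. The restricted master is solved with SciPy's HiGHS interface; the dual signs follow SciPy's convention for "<=" constraints. This is a simplified stand-in of mine, not the thesis's multi-raw implementation, and a more careful version would re-index the items by dual/width ratio before each pricing call.

    import numpy as np
    from scipy.optimize import linprog

    def column_generation(w, b, W, knives, tol=1e-6):
        m = len(w)
        # Start with one homogeneous pattern per order width.
        cols = [[min(W // w[j], knives) if i == j else 0 for i in range(m)]
                for j in range(m)]
        while True:
            A = np.array(cols, dtype=float).T            # patterns as columns
            res = linprog(c=np.ones(len(cols)),          # minimize rolls used
                          A_ub=-A, b_ub=-np.array(b, dtype=float),
                          bounds=(0, None), method="highs")
            duals = -res.ineqlin.marginals               # prices of the covering rows
            pattern, value = modified_bb(list(duals), w, W, knives)
            if value <= 1 + tol:                         # no column prices out
                return res.x, cols                       # fractional LP solution
            cols.append(pattern)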

For the cutting stock problem with multiple raws, the running time increases when the number of required orders and the number of raws increase. The running time is highest when the ratio between the smallest order width/length (set by $v_1$) and the largest order width/length (set by $v_2$) is close to 1, that is, when the orders are of similar sizes. When the ratio is high, the number of possible cutting patterns that can fill the orders becomes extremely large. This makes the problem much harder to solve.

Lastly, in solving the cutting stock problem with a limited number of available cutting knives, the modified branch and bound method proves to be better than the


Lagrangian relaxation in running time when the number of cutting knives is small. In this case, the number of nodes explored by the modified branch and bound method is drastically reduced by the modified pruning scheme developed to handle the second knapsack constraint (representing the limited number of knives). The number of iterations in the Lagrangian relaxation technique does not appear to be much affected

(reduced) by the small number of cutting knives available.

On the other hand, when the number of cutting knives available is large, the number of nodes (patterns) reduced by the modified pruning scheme (even though it is still very large) cannot compensate for the exponential increase in the possible number of nodes (patterns) that need to be explored in the solution tree. In contrast, the corresponding increase in the number of iterations in the Lagrangian relaxation method is not exponential. Hence, the Lagrangian relaxation is more efficient in terms of running time in this case.

In terms of quality of solution, the modified branch and bound method is an exact method, meaning that if it is allowed to run to completion (i.e., it terminates under the designed terminating conditions), it will always guarantee optimality. In theory, the Lagrangian relaxation method is also an exact solution method, but in practice the step-size selection in the multiplier update is executed approximately at best, so terminating at an optimal solution is not always guaranteed.
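For reference, the step in question is usually a subgradient update of the multiplier $\lambda \ge 0$ on the relaxed knife constraint $\sum_i x_i \le h$; a standard textbook scheme (e.g., Lasdon (8)), not necessarily the exact rule used in this implementation, is

$\lambda^{t+1} = \max\left\{0,\; \lambda^{t} + s_t \left( \sum_i x_i^{t} - h \right)\right\}, \qquad s_t = \frac{\theta_t \left( L(\lambda^{t}) - \bar{z} \right)}{\left( \sum_i x_i^{t} - h \right)^{2}},$

where $x^{t}$ solves the Lagrangian subproblem at $\lambda^{t}$, $L(\lambda^{t})$ is its value (an upper bound), $\bar{z}$ is the best known feasible value, and $\theta_t \in (0, 2]$ is a step parameter that is typically halved when the bound stops improving. The practical approximation lies in the choice of $\bar{z}$ and $\theta_t$.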


Bibliography

1. Gilmore, P. C. and Gomory, R. E. A Linear Programming Approach to the Cutting-Stock Problem. Operations Research, Vol. 9, No. 6 (Nov.–Dec. 1961), pp. 849–859. INFORMS.

2. Cormen, Thomas H., et al. Introduction to Algorithms. London, England: The MIT Press, 2009. ISBN 978-0-262-03384-8.

3. Winston, Wayne L. and Venkataramanan, Munirpallam. Introduction to Mathematical Programming. Pacific Grove: Brooks/Cole-Thomson Learning, 2003. ISBN 0-534-35964-7.

4. Chvátal, Vašek. Linear Programming. New York: W. H. Freeman and Company.

5. Belov, Gleb and Scheithauer, Guntram. Solving the General One-Dimensional Cutting Stock Problem with a Cutting Plane Approach. 2000.

6. Matoušek, Jiří and Gärtner, Bernd. Understanding and Using Linear Programming. New York: Springer, 2007.

7. Ferguson, Thomas S. Linear Programming. Los Angeles: Department of Mathematics, University of California, Los Angeles.

8. Lasdon, Leon S. Optimization Theory for Large Systems. 2002, pp. 20–40.

9. Gass, S. I. Linear Programming: Methods and Applications. New York: McGraw-Hill, 1958.

10. Dantzig, G. B. Linear Programming and Extensions. Princeton: Princeton University Press, 1963.

11. Hristakeva, Maya and Shrestha, Dipti. Different Approaches to Solve the 0/1 Knapsack Problem.