Mathematical Models, Heuristics and Algorithms for Efficient Analysis and Performance

Evaluation of Job Shop Scheduling Systems Using Max-Plus Algebraic Techniques

A dissertation presented to

the faculty of

the Russ College of Engineering and Technology of Ohio University

In partial fulfillment

of the requirements for the degree

Doctor of Philosophy

Manjeet Singh

December 2013

© 2013 Manjeet Singh. All Rights Reserved.

This dissertation titled

Mathematical Models, Heuristics and Algorithms for Efficient Analysis and Performance

Evaluation of Job Shop Scheduling Systems Using Max-Plus Algebraic Techniques

by

MANJEET SINGH

has been approved for

the Department of Mechanical and Systems Engineering

and the Russ College of Engineering and Technology by

Robert P. Judd

Professor of Industrial and Systems Engineering

Dennis Irwin

Dean, Russ College of Engineering and Technology


ABSTRACT

SINGH, MANJEET, Ph.D., December 2013, Mechanical and Systems Engineering

Mathematical Models, Heuristics and Algorithms for Efficient Analysis and Performance

Evaluation of Job Shop Scheduling Systems Using Max-Plus Algebraic Techniques

Director of Dissertation: Robert P. Judd (127 pp.)

This dissertation develops efficient methods for calculating the makespan of a perturbed job shop. All iterative scheduling algorithms require their performance measure, usually the makespan, to be calculated during every iteration. Therefore, this work can enhance the efficiency of many existing scheduling heuristics, e.g., Tabu Search, Genetic Algorithms, etc. This increased speed provides two major benefits. The first is the capability of searching a larger solution space, and the second is the capability to find a better solution in the extra time.

The following is a list of the major highlights of this dissertation. The dissertation extends the hierarchical block diagram model formulation and composition that was originally proposed by Imaev [2]. An algorithm is developed that reduces the complexity of calculating the makespan of the perturbed schedule of a job shop with no recirculation from O(MN log MN) to O(N²), where M is the number of machines and N the number of parts. An efficient algorithm that calculates the Kleene star of a lower triangular matrix is presented; this algorithm has a complexity of ( ) of the traditional approach. Finally, a novel pictorial methodology, called SBA (Serial Block Addition), is developed to calculate the makespan of a perturbed job shop. A very efficient single perturbed machine scheduling algorithm, with a complexity of O(N²), is derived using the SBA method. The algorithm was tested on 10,000 randomly generated problems. The solutions provided by the scheduling algorithm were within a 3% deviation of the optimal solutions 95.27% of the time.


ACKNOWLEDGEMENTS

I would like to express my deep gratitude to all the people who made this dissertation possible. First and foremost, a hearty thanks to my advisor, Dr. Robert P. Judd. It was his vision and constant support which helped me through the continuous struggle of understanding the known and finding the unknown. Without his guidance and support, this dissertation wouldn't have been possible. I will never forget one of his invaluable sayings: “Most of the time there is a way to simplify a seemingly complex problem”.

Additionally, I would like to thank Dr. Gursel Suer for all his support and guidance throughout my graduate studies at Ohio University. I would also like to thank all of the members of my dissertation committee, namely Dr. Namkyu Park, Dr. Andy Snow and Dr. Ken Cutright. Their invaluable suggestions and constructive criticism helped me immensely during my research.

I would also like to thank Tonya Seelhorst, who always found a way to help me in my time of need, as well as my close friend, Tianjiao Chen, who was the source of inspiration which led to this fruitful journey. I send a big hug and thanks to my sister, Indu, who I have missed having around immensely, for all the encouragement. I owe all my achievements to my parents, Mr. Sunder Singh and Mrs. Ram Rati, who have inspired me throughout my life with their hard work and perseverance. Finally, I would like to thank all my friends who still manage to love me despite all my idiosyncrasies.

TABLE OF CONTENTS Page

Abstract ...... 3
Acknowledgements ...... 5
List of Tables ...... 8
List of Figures ...... 9
List of Algorithms ...... 10
Chapter 1: Introduction ...... 11
1.1 Brief Overview of the Problem ...... 11
1.2 Contributions ...... 12
Chapter 2: Literature Review ...... 16
2.1 Max Plus Algebra in Scheduling ...... 16
2.2 Scheduling Techniques ...... 21
2.2.1 Scheduling of Cyclic Systems ...... 21
2.2.2 Three Machine Scheduling ...... 23
2.2.3 Enumerative Techniques ...... 24
2.2.4 Constructive Algorithms ...... 25
2.2.5 Iterative Algorithms ...... 25
2.3 Calculation of Makespan of a Job Shop ...... 26
2.4 Missing Areas in Existing Research ...... 27
Chapter 3: Max Plus Algebra ...... 28
Chapter 4: Block Diagram Modeling Approach ...... 31
4.1 Applying the Block Diagram Approach to a Single Machine ...... 33
4.2 An Example Problem ...... 36
Chapter 5: Modeling the Job Shop System ...... 39
5.1 Algorithm for Efficient Calculation of ...... 42
5.2 Special Cases ...... 43
5.2.1 Generic Flow Shop ...... 43
5.2.2 Flow Shop with all Jobs Flowing from G to F ...... 44
5.2.3 Recirculation of Jobs in G ...... 45
5.2.4 No Interaction between F and G ...... 45
5.3 Example Problem ...... 46
Chapter 6: Bi-Part Modeling of a Job Shop Using Max Plus Algebra ...... 52
6.1 Determining the Makespan of a System ...... 55
6.2 Calculation of , and ...... 55
6.3 Analysis of the Makespan Equation of the System – A Special Case ...... 60
Chapter 7: Modeling of a Job Shop Without Recirculation of Jobs ...... 62
7.1 Makespan Equation for a Job Shop without Recirculation of Jobs ...... 62
7.2 Calculation of F under Scheduling Perturbations for V Containing a Single Machine ...... 62
7.3 Algorithm for Efficient Calculation of Makespan under Scheduling Perturbations for V ...... 65
7.4 Example Problem ...... 67
Chapter 8: Modeling of Job Shop With Recirculation of Jobs ...... 70
8.1 Example Problem ...... 73
8.2 Test for Feasibility of a Schedule ...... 74
8.3 Computation of the Star of a Lower Triangular Matrix ...... 76
8.4 Applying the Block Diagram Approach for Calculation of F for V Containing a Single Machine and Jobs going through Recirculation and Perturbation ...... 77
8.5 Algorithm to Calculate the Makespan of a Job Shop System with Recirculation and Reordering ...... 80
8.6 Example Problem ...... 80
Chapter 9: Heuristic Algorithm for Minimizing Makespan of Job Shops With Recirculation ...... 83
9.1 Scheduling Equivalency ...... 84
9.2 Scheduling Heuristic for Minimizing the Makespan ...... 89
9.2.1 Algorithm for Sorting the Jobs According to Precedence ...... 93
9.2.2 Algorithms for Finding beq and ceq ...... 96
9.2.3 Tracking Gaps ...... 101
9.2.4 Calculating the Feasible Range Properties ...... 101
9.2.5 Calculating the Length of the CSBAD after Addition of a Row ...... 102
9.2.6 Degrees of Freedom ...... 104
9.2.7 Algorithm Description ...... 105
9.2.8 Computational Complexity of the Algorithm 9.4 ...... 108
9.3 Example Problem ...... 109
9.4 Experimentation ...... 112
Chapter 10: Summary ...... 115
10.1 Summary of Key Points in this Research ...... 115
10.2 Future Work ...... 117
References ...... 118


LIST OF TABLES Page

Table 4.1: Processing order and processing time ...... 36
Table 5.1: Processing times ...... 46
Table 5.2: Processing order for jobs ...... 47
Table 5.3: Initial processing order for machines ...... 47
Table 6.1: Processing times ...... 57
Table 6.2: Processing order for jobs ...... 57
Table 6.3: Initial processing order for machines ...... 57
Table 7.1: Processing order of jobs/respective processing times ...... 67
Table 7.2: Initial schedule for all the machines ...... 67
Table 7.3: Application of the makespan algorithm ...... 68
Table 8.1: Application of algorithm 8.2 ...... 81


LIST OF FIGURES

Page

Figure 4.1: Manufacturing system block ...... 31
Figure 4.2: Representation of an operation of job n on machine m ...... 32
Figure 4.3: An example structure ...... 34
Figure 4.4: An example structure (flow diagram) ...... 35
Figure 4.5: Example problem (single machine structure) ...... 37
Figure 5.1: Block diagram for modeling the composition of two subsystems ...... 39
Figure 5.2: Modeling the addition of a machine ...... 44
Figure 5.3: Flow shop with all jobs flowing from G to F ...... 44
Figure 5.4: Recirculation of jobs in G ...... 45
Figure 5.5: No interaction between F and G ...... 46
Figure 5.6: Subsystem consisting of all operations from M1 and M2 ...... 48
Figure 5.7: Subsystem consisting of all operations from M1, M2 and M3 ...... 49
Figure 5.8: Subsystem consisting of all operations from M1, M2, M3 and M4 ...... 50
Figure 6.1: Division of a manufacturing system into I and V ...... 52
Figure 6.2: Relationship between the input and output of the manufacturing system ...... 54
Figure 6.3: Generic structure of I and V ...... 56
Figure 6.4: (a) Composition of M1 and M3 ...... 58
Figure 6.4: (b) Composition diagram of M1 and M3 ...... 58
Figure 6.5: (a) Composition of M1, M2 and M3 ...... 59
Figure 6.5: (b) Composition diagram of M1, M2 and M3 ...... 59
Figure 7.1: Special structure of variant subsystem ...... 64
Figure 8.1: Variant subsystem with recirculation ...... 71
Figure 8.2: Example structure of a generic V with recirculation ...... 73
Figure 8.3: Partial adjacency graph for ...... 75
Figure 8.4: Constructive division of a lower triangular matrix ...... 77
Figure 8.5: Variant subsystem with recirculation and reordering ...... 79
Figure 9.1: SBAD for the general case ...... 84
Figure 9.2: SBAD for the base case ...... 86
Figure 9.3: SBAD for the inductive step ...... 88
Figure 9.4: CSBAD for SBAD shown in figure 9.3 ...... 90
Figure 9.5: Algorithm strategy description ...... 91
Figure 9.6: Example problem with 8 jobs ...... 94
Figure 9.7: Estimating the values of vector beq ...... 97
Figure 9.8: Estimating the values of vector ceq ...... 98
Figure 9.9: Total length case 1 ...... 103
Figure 9.10: Total length case 2 ...... 104
Figure 9.11: CSBAD after the addition of first job ...... 110
Figure 9.12: CSBAD showing optimal schedule for the example problem ...... 111
Figure 9.13: Graphical representation of 1st experimentation results ...... 113
Figure 9.14: Graphical representation of 2nd experimentation results ...... 114

LIST OF ALGORITHMS

Page

Algorithm 5.1: Efficient calculation of ...... 43
Algorithm 7.1: Efficient calculation of the MS ...... 66
Algorithm 8.1: Efficient calculation of the Kleene star of a lower triangular matrix L ...... 76
Algorithm 8.2: Efficient calculation of the makespan for a system with recirculation ...... 80
Algorithm 9.1: Determination of levels for all jobs ...... 96
Algorithm 9.2: Determination of ...... 99
Algorithm 9.3: Determination of ...... 100
Algorithm 9.4: Scheduling algorithm for minimizing the makespan ...... 106


CHAPTER 1: INTRODUCTION

Job shops are a special category of manufacturing systems. In a job shop, the flow of jobs through the machines is not identical. This means that each job might not require the machines in the same order for processing. Also, all the machines may not be required by all the jobs. In a job shop scheduling problem, the number of schedules generated is equal to the number of machines in the system, because each machine can have a distinct schedule. This work focuses on modeling deterministic discrete-event job shop manufacturing systems using max plus algebra. In a deterministic manufacturing system, the routes of all the jobs, the order of jobs on the machines and the processing times are predetermined, so a deterministic manufacturing system is always free of choices or conflicts [74]. This work presents modeling approaches for systems with and without recirculation of jobs. Recirculation is defined as a state in which the arrival times of the parts at a machine in the system depend upon the departure times of these parts.

1.1 Brief Overview of the Problem

For many years the optimal scheduling of job shops has been a problem of interest for researchers in the field of operations research. The job shop scheduling problem is classified as an NP hard problem [1]. A common approach to problems where it is impossible to find the optimal solution in feasible time is to solve them using heuristics, like Tabu Search [3], Simulated Annealing [4], Genetic Algorithms [5], Ant Colony Optimization [6], etc.

Heuristics are algorithms that find good or acceptable solutions in the proximity of the optimal solution in a reasonable time. For solving the problem, an iterative approach is applied by taking an existing solution (schedule) and altering that schedule systematically to get closer to an optimal solution. Any alteration or change in a schedule is defined as a perturbation. In this work the word “perturbation” will mean a change in a schedule in the system, irrespective of how large or small the change may be. The performance of a heuristic depends on how fast the next schedule is found and the speed with which the makespan (a common performance measure) is calculated for each new schedule. The makespan is the difference between the time the first part starts processing and the time when all the parts are done processing. The main focus of this work is to develop methods to calculate the makespan of the altered schedule faster. This approach can be used in conjunction with certain categories of heuristics to save calculation time, thereby allowing the heuristics to converge towards the optimal solution faster.
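The cost being reduced here can be made concrete with a small sketch. The following Python snippet is illustrative only: it is the straightforward simulation of a fixed schedule, not the faster algorithms developed in this dissertation, and the job routes and machine sequences are hypothetical example data.

```python
# Illustrative sketch (hypothetical example data): computing the makespan
# of a job shop for one fixed schedule by simulating operation start times.

def makespan(routes, machine_order):
    """routes: job -> list of (machine, processing_time), in route order.
    machine_order: machine -> ordered list of (job, op_index)."""
    done = {}                                   # completion time of (job, op)
    job_ready = {j: 0 for j in routes}          # when each job's next op may start
    mach_ready = {m: 0 for m in machine_order}  # when each machine is next free
    pending = {m: 0 for m in machine_order}     # position in each machine's sequence
    total = sum(len(ops) for ops in routes.values())
    scheduled = 0
    while scheduled < total:
        progressed = False
        for m, seq in machine_order.items():
            if pending[m] >= len(seq):
                continue
            job, k = seq[pending[m]]
            # The job's previous operation must already be finished.
            if k > 0 and (job, k - 1) not in done:
                continue
            p = routes[job][k][1]
            start = max(job_ready[job], mach_ready[m])  # wait for job AND machine
            done[(job, k)] = start + p
            job_ready[job] = mach_ready[m] = start + p
            pending[m] += 1
            scheduled += 1
            progressed = True
        if not progressed:
            raise ValueError("infeasible schedule (deadlock)")
    return max(done.values())

# Two jobs, two machines: J1 visits M1 then M2; J2 visits M2 then M1.
routes = {"J1": [("M1", 3), ("M2", 2)], "J2": [("M2", 4), ("M1", 1)]}
order = {"M1": [("J1", 0), ("J2", 1)], "M2": [("J2", 0), ("J1", 1)]}
print(makespan(routes, order))  # -> 6
```

An iterative heuristic would perturb `order` (for example, swap two entries on one machine) and re-run a computation like this on every iteration; that repeated recomputation is exactly the cost this work reduces.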

1.2 Contributions

The contributions of this work in the field of job shop scheduling include:

1. The development of an efficient equation to calculate the system matrix, which maps job arrival times to job leaving times on a single machine. The computational complexity of calculating the system matrix is O(N²) (where N is the number of operations on the given machine).

2. A new approach to calculate the system matrix of the composition of two subsystems. This approach can be used to calculate the system matrix for the entire job shop by constructively composing the system matrices of all the machines. The computational complexity of calculating the composition of two subsystems is O(O³) (where O is the maximum number of operations over the given subsystems).

3. The development of a new modeling approach that divides the system into two parts: variant and invariant. This approach is used to find various efficient models of job shop manufacturing systems that are used in later chapters of this document.

4. The development of an efficient algorithm that reduces the computational complexity, when compared to the traditional approach, of calculating the makespan when perturbations in the system are confined to a single machine. This model does not allow recirculation of jobs. The computational complexity of calculating the makespan using this algorithm is O(N²) (where N is the number of operations in the variant subsystem (single machine)).

5. The development of an extension of the aforementioned algorithm that calculates the makespan for perturbations in the variant when recirculation of jobs is allowed. The computational complexity of calculating the makespan using this algorithm is ( ), where O is the number of operations on the machine.

6. Finally, the development of a new scheduling algorithm for a single machine in a job shop. The objective of the algorithm is to propose a schedule on the single machine variant to minimize the makespan of the given system. The computational complexity of this algorithm is O(N²), where N is the number of operations in the variant subsystem.

The dissertation is organized into a set of chapters, as follows: Chapter 2 presents a literature review of the previous research in scheduling using max plus algebraic techniques. It also gives a brief introduction to enumerative and heuristic techniques used to solve job shop scheduling problems. Chapter 3 provides an introduction to the basics of max-plus algebra, which will be helpful in understanding the usage of the algebraic techniques presented in this work. Chapter 4 presents an introduction to Imaev’s Block Diagram [2] approach. It is then used to model the structure of a variant subsystem consisting of operations on a single machine. The new work in this chapter identifies a special structure for arranging elements in the system equation. Chapter 5 presents an approach to model the composition of two subsystems. This approach can be used to develop the system matrix of an entire job shop (the relationships between the outputs and inputs of a system) by constructively adding all the subsystems. Chapter 6 presents the core methodology proposed in this work. It divides the job shop system into two subsystems: the variant V and the invariant I. Further, chapter 6 presents an efficient modeling methodology and an analysis of the proposed bipartite structure of the system using max plus algebra. Chapter 7 develops an algorithm for fast computation of the makespan when there is a change (perturbation) in a schedule in the variant subsystem and there is no recirculation in the variant. Chapter 8 extends the work in chapter 7 to develop an algorithm for fast computation of the makespan when the variant subsystem does have recirculation. Chapter 9 develops a scheduling algorithm to minimize the makespan when there is a perturbation in the variant subsystem, which consists of only one machine. Chapter 10 summarizes the work presented in this dissertation and also provides the scope of future research in this field.


CHAPTER 2: LITERATURE REVIEW

This chapter presents the previous research done in the area of max plus algebra as it applies to scheduling. This review helps to ascertain the novelty of the work proposed in the dissertation. It also contains an introduction to some enumerative and heuristic techniques used in job shop scheduling.

2.1 Max Plus Algebra in Scheduling

Max plus is a linear algebra over a mathematical structure called a dioid [7]. Max plus algebra is an effective tool for modeling discrete event systems. This algebra can be used to express the event timing dynamics of a deterministic discrete event system (like a job shop manufacturing system) by means of linear equations. Max plus algebra has a unique theoretical structure that can be used to analyze different performance measures and behaviors of manufacturing systems [2]. The equations which describe event timings, either in max-plus linear queuing models or in timed event graphs, can be expressed using max plus algebra. System specifications can also be used to generate event timing equations. Four methods exist to derive max plus models for a deterministic manufacturing system; they are:

1. Timed event graph or a directed graph representation of the system [[39], [40], [41]].

2. Max plus linear queuing networks [[42], [43], [44]].

3. The system specifications [[45], [46]].

4. Block diagrams [2].
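The linear event-timing equations mentioned above can be illustrated with a small numeric sketch. In the sketch below, ⊕ is max, ⊗ is +, and ε = −∞ plays the role of the additive identity; the 2×2 matrix and its holding times are hypothetical example data, not taken from any of the cited works.

```python
# Minimal max-plus sketch: "addition" is max and "multiplication" is +,
# with NEG_INF acting as the max-plus zero element.
NEG_INF = float("-inf")

def mp_matvec(A, x):
    """(A (x) x)[i] = max_k (A[i][k] + x[k])"""
    return [max(A[i][k] + x[k] for k in range(len(x))) for i in range(len(A))]

def mp_matmul(A, B):
    """(A (x) B)[i][j] = max_k (A[i][k] + B[k][j])"""
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Event timing recursion x(k+1) = A (x) x(k): machine 1 holds a part for
# 3 time units, machine 2 for 2, and machine 2 also waits on machine 1.
A = [[3, NEG_INF],
     [3, 2]]
x0 = [0, 0]
x1 = mp_matvec(A, x0)  # [3, 3]
x2 = mp_matvec(A, x1)  # [6, 6]
print(x1, x2)
```

Both event times advance by 3 per cycle in this toy system, which mirrors the eigenvalue/period property of cyclic systems discussed in this section.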

Nambiar [38] used max plus algebraic techniques to model cyclic flow shops. He used max plus algebra to find mathematical formulations that could be used to calculate the makespan of a cyclic permutation flow shop. He proposed a new idea that detects possible regions for the improvement of a given schedule; he called these regions opportunities. He proposed heuristics to efficiently calculate the period in case of a perturbation in the schedule. He proposed a construction heuristic based on the NEH heuristic. He also proposed five improvement heuristics for finding the schedule with a minimum period.

Imaev [2] proposed the block diagram approach to model manufacturing systems, which is used in this work. The block diagram approach is a pictorial representation of the manufacturing system. It shows the flow of jobs through the machines in a system as well as the flow of machines over the jobs. This approach provides a clear representation of the relationships between system variables; therefore it helps users visualize the system. The block diagram approach has been used in this project to model the job shop manufacturing system. The current work uses a model which considers a buffer in the system. Therefore, the system variables, three inputs and three outputs as shown in Imaev’s work, are reduced to two inputs and two outputs in this project’s model. Imaev also showed that his approach can be used to model both job shops and cyclic flow shops.

Imaev [2] also showed an interesting property of cyclic flow shops. The eigenvalue of the system matrix gives the period of the cyclic flow shop system, and the corresponding eigenvector leads to the computation of the steady-state periodic schedule of the flow shop system. He also showed that the system matrix is an inverse Monge matrix. He derived an algorithm for computing a max-plus algebraic eigenvector of an inverse Monge matrix with a maximum computational complexity of O(n²). He also proved that the class of inverse Monge matrices is closed under max-plus algebraic multiplication.

Max plus algebra is being used by many researchers worldwide to model systems and real life problems. Hiroyuki Goto is one of the leading researchers in this field; he has applied max plus algebraic techniques to solve problems in various areas. Goto and Takahashi [47] devised a methodology which can be used in scheduling applications. They proposed a concept called a cell, which can be used to store longest paths. They also proposed reducing the complexity of the state equation by imposing constraints on the system parameters [47].

Max plus algebraic techniques are used in scheduling dynamic events like online scheduling [[49], [50]]. Online scheduling is equivalent to rescheduling of the systems: as the system parameters, capacity and order values, change with time, it is harder to monitor changes in the system. Goto derived two state-space models for the system that can be used to keep track of the state changes, earliest and latest times, and hence efficiently monitor the system [[49], [50]].

Goto and Masuda [51] used max plus algebra to model the behavior of systems by considering no-concurrency with subsequent events. The focus of their research involved repetitive DESs (discrete event systems) with a MIMO (Multiple Input and Multiple Output) FIFO structure. Conventional models deal with no-concurrency of previous events and hence may not give optimal solutions for a large number of jobs; therefore, the methodology provided by Goto and Masuda is better in such situations [51]. They also proposed a novel algorithm for deriving a state-space model which determines an optimal control input using constraint matrices and parameter vectors for scheduling problems [52]. The problems are considered solved if the jobs are completed before the due dates, achieving zero tardiness [52].

Goto and Yoshida proposed an algorithm to calculate the state vector of Directed Acyclic Graphs (DAGs) with a complexity of O(n(n+m)), where n equals the number of nodes and m equals the number of arcs in the graph [53]. Furthermore, Goto and Yoshida proposed two new algorithms which reduce the complexity of calculating the Kleene star multiplied with a state variable for DAGs. The maximum complexity of the two algorithms is O(m) or O(n²) [75].

Goto and Ichige proposed a high-speed (reduced average processing time) computation of the Kleene star of the weighted adjacency matrix for implementation on a Cell Broadband Engine (CBE) processor [54].

One more interesting scheduling application of max plus algebra occurs in the sub-field of resource conflict detection. Here, the researchers developed a model, using max plus algebra, which detects resource conflicts and the overlap of time-lines and workers of the processes; it also resolves the conflict by moving the low priority process up in the schedule. This procedure of moving the low priority process is performed by defining an adjacency matrix [55]. Goto used max plus algebraic techniques to develop a novel scheduling method. He proposed that this method works by controlling the in-process jobs between facilities by putting an upper bound and a lower bound constraint on the system. He used this method in numerically simulating a transportation system with success [56]. Goto and Masuda extended their existing state-space representation, which can account for both capacity and order constraints. They formulated an augmented state representation which can be used to obtain the earliest start and completion times for processes in the installed facilities [57].

A max plus algebraic model was used in the scheduling of cyclically operated high-throughput screening systems in [[48], [70]]. The authors used max plus algebra to model the system and devised a control strategy to monitor the system's deviation from cyclic behavior [[48], [70]]. The deviation, which is the difference between the predetermined cycle time and the observed cycle time, is monitored at runtime [[48], [70]].

Max plus algebra semantics were applied to resource sharing and system scheduling decisions in Synchronous Dataflow Graphs (SDFGs) [68]. The max plus model helps in exploring the tradeoffs between different alternative schedules. SDFGs are applied in fields such as multimedia applications in embedded systems [68].

Max plus algebra was applied in modeling a discrete event system which was initially represented as a matrix model [69]. This matrix model was then converted to a max plus model. The matrix model helps better control the system, whereas the max plus model gives a deeper understanding of the dynamics of the system and, therefore, allows for better analysis of it [69].

One of the real life applications of max plus algebra occurs in the field of railway traffic network scheduling [71]. The delay of a train from its predetermined scheduled time causes a phenomenon called delay propagation. This causes major issues in future schedules and causes the system to exhibit unexpected behavior. The modeling of the delay propagation over the entire network is done for a well-defined max plus algebraic model of the system with its predetermined characteristics [71]. Hiroyuki et al. [[72], [73]] used a max plus linear representation to model shipbuilding lines and, thereby, tackle the scheduling problems occurring on these lines. They solved the problem of adhering to the due dates (zero tardy jobs) by adjusting the arrival times of parts and materials while scheduling the system [72]. They also studied the problem of space constraints in the stockyard while scheduling the system; here, they addressed the due date related constraints as well [73].

2.2 Scheduling Techniques

This section introduces some of the existing techniques for solving a scheduling problem. These techniques can be divided into several groups: cyclic systems, three machine systems, enumerative techniques, constructive techniques and iterative techniques.

2.2.1 Scheduling of Cyclic Systems

The sequencing of a cyclic job shop is an NP hard problem [58], and Hall et al. proposed an algorithm for this problem. They studied cyclic shop scheduling problems concerning two and three machine problems and documented the complexities [59].

Specialized Petri nets with each node connected to only one input and one output transition, with unit weights, are called event graphs. An event graph based construction

[64] proposed to solve the two machine no wait (with no buffer) flow shop scheduling problem using polynomial time algorithms.

The cyclic job shop problem with no buffer was solved by Song et al.; this approach gives an optimal schedule which remains the same over every minimum part set

[[65], [66]]. The authors proposed a mixed model, using the petri nets, to solve this problem. Genetic algorithms were used for solving cyclic scheduling problems with the objective of minimizing the work in progress (WIP) [67]. The GA based algorithm minimizes WIP, while the solution schedule will match the cycle time.

The cycle time was computed as the load on the bottleneck machine. The authors only used this algorithm for systems consisting of a small number of operations [67].

2.2.2 Three Machine Scheduling

Behnamian et al. [77] researched ways to minimize the makespan in a three-machine flowshop scheduling problem. The problem considered consisted of a batch processing machine placed between two resources on the first and third stages, respectively. It is an NP hard problem. They proposed a GA and a heuristic algorithm, based on Johnson's algorithm, to solve this problem [77]. Su and Lin studied the three machine flow shop problem with two operations per job [76]. They proposed methodologies that can be used on three variants of this system [76]. A heuristic algorithm based on Johnson's algorithm and a heuristic gradient method is proposed in

[78]. Allahverdi and Al-Anzi proposed a branch and bound algorithm for a three machine flow shop problem [79]. They used a three phase hybrid heuristic to calculate the upper bound for the problem. The results show that their efficient algorithm can be used on large problems [79]. Wang et al. studied a three-machine permutation flow shop scheduling problem under simple deterioration. Simple deterioration means that the processing time of a job is a linear function of its start time [80]. As such, Wang et al. proposed two algorithms: a branch and bound algorithm and a heuristic algorithm. For large problems, the heuristic algorithm performed effectively and was preferred due to the time inefficiency of branch and bound [80]. Chen et al. [81] studied the sequencing problem in a three-machine flow shop to minimize the makespan. They proposed a heuristic algorithm, which is based on Johnson's algorithm and gives a maximum computational complexity of O(N log N) [81]. Wang et al. [82] proposed Genetic

Algorithm (GA) and Simulated Annealing (SA) based algorithms to a three machine flow 24 shop problem. The results obtained showed the dominance of SA over GA for this case

[82]. Strusevich et al. [83] proposed an O(N log N) algorithm for minimizing the maximum completion time for a partially ordered three machine job shop problem. Su and Chen worked on a three machine problem where the last operation was optional [84].

2.2.3 Enumerative Techniques

In scheduling, the enumeration methods list all the possible schedules and then choose the optimal solution. This process guarantees an optimal solution [14]. Integer programming, mixed-integer programming and dynamic programming are different types of mathematical techniques widely used to solve smaller scheduling problems. These programming techniques are not used broadly because of complexity considerations

[15]. Branch and bound is another enumerative technique, which expands in branches that contain optimal solution. In this way it does not need to enumerate all solutions [14].

Here, different strategies for determining branches are used, like active optimal schedule strategy [16], settling strategy [17] etc. Settling was found to be better than the active schedule creation technique [18]. The techniques introduced above guarantee to give optimal solutions; however, they still cannot find optimal solutions in polynomial time.

There are two types of heuristic algorithms [19]: constructive and iterative. This classification is based on whether iterations are required to generate a solution (schedule) [20]. Both constructive and iterative techniques are described in the sections below.

2.2.4 Constructive Algorithms

Constructive heuristics are a category of heuristics that construct a schedule by adding one operation at a time. Two of the well-known categories of constructive heuristics are:

1. Priority Dispatching Rules – Here, the schedule is generated based on certain rules; these rules are designed to break the conflict between competing operations in a schedule, resulting in a single solution [1]. Jackson [21] developed a priority-based heuristic for an N x 2 (N jobs, 2 machines) job shop problem. Giffler and Thompson developed a model using the priority dispatch rule for an N x M job shop problem [20]. More than one hundred static and dynamic priority dispatching rules are shown in a review by Panwalkar and Iskander [22]. Sabuncuoglu and Bayiz proposed a filtered strategy where the selections are made based on rules such as "Most Work Remaining" [23].

2. Bottleneck Based Heuristics – Priority dispatch rules find solutions quickly, but they do not guarantee even a near-optimal solution. So, in 1988, Adams et al. [24] proposed a technique called Shifting Bottleneck. The shifting bottleneck heuristic divides the problem into small subproblems consisting of one machine each [24].
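The conflict-breaking step of a priority dispatching rule can be sketched in a few lines. This is an illustrative Python sketch only — the `Op` record, the `most_work_remaining` function and the data layout are our own inventions, not taken from the cited works:

```python
from dataclasses import dataclass

@dataclass
class Op:
    job: int        # job index
    machine: int    # machine the operation runs on
    time: float     # processing time of the operation

def most_work_remaining(schedulable, remaining_work):
    """'Most Work Remaining' priority rule: among the currently
    schedulable operations (those competing for the same machine),
    pick the one whose job has the most total processing time left."""
    return max(schedulable, key=lambda op: remaining_work[op.job])

# jobs 0 and 1 each have one schedulable operation on machine 0;
# job 1 has more total work remaining, so it wins the conflict
ops = [Op(job=0, machine=0, time=2.0), Op(job=1, machine=0, time=1.0)]
print(most_work_remaining(ops, remaining_work={0: 4.0, 1: 9.0}).job)  # 1
```

Repeatedly applying such a rule whenever two operations compete for the same machine yields exactly one schedule, which is what makes these heuristics constructive.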

2.2.5 Iterative Algorithms

Iterative algorithms, as the name suggests, try to converge to a solution iteratively. The process starts with a known feasible solution, which can be generated using a constructive algorithm or by random selection [19]. Many iterative algorithms have been used to solve scheduling problems, such as Genetic Algorithms, Tabu Search and Simulated Annealing. These algorithms use a similar search technique called local search, which is a way of finding solutions, or neighbors [25], of the existing schedule(s). For a detailed explanation of iterative heuristics refer to [26].

Genetic Algorithms have been applied to various problems by researchers; some examples include [27], [28], [29], [30] and [31]. The usage of Genetic Algorithms on a problem is explained step-by-step in [104]. Tabu Search has also been applied to various problems by researchers; it is explained, along with its usage and some of the known strategies, in [10], [11], [12] and [13]. Many researchers have used Simulated Annealing [[25], [32]] as a technique to obtain better solutions than those obtained from other techniques; some examples are [33], [34], [35], [36] and [37].

2.3 Calculation of Makespan of a Job Shop

Calculating the makespan of a manufacturing system is equivalent to finding the longest path in a graph, where the vertices of the graph represent the operations and the edges represent the flow of the jobs and machines. A common algorithm used to determine the longest path was developed by Dijkstra [[98], [99]]. This algorithm has a complexity of O(V²), where V is the number of nodes in the graph. It is common to assume that the number of operations in a job shop is NM. Thus, calculating the makespan of a job shop using the Dijkstra algorithm has a complexity of O(N²M²). A common enhancement to the Dijkstra algorithm is to implement the sorting using a heap-based priority queue [100]. This reduces the complexity to O(E + V log V), where E represents the number of edges. Fredman's modified algorithm is the fastest known algorithm for calculating the longest (or shortest) path in an arbitrary directed graph. The graph model of job shops [[2], [19]] has 2 edges per node: one modeling the job flow and the other modeling the machine schedules. This makes the complexity of Fredman's algorithm O(2NM + NM log(NM)) = O(NM log(NM)) when it is applied to the calculation of the makespan. The makespan algorithm in French's classic book [14], on page 162, is simply Dijkstra converted to manufacturing terminology. Therefore, it has a complexity of O(NM log(NM)) if the set of schedulable operations is stored as a priority queue.
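As a complement to the Dijkstra-based discussion above: when the operation graph contains no cycles (the usual case without recirculation), the longest path — and hence the makespan — can be computed in O(V + E) by visiting the nodes in topological order. The sketch below is our own illustration (function and variable names are not from the cited works), with operation durations folded into the edge weights:

```python
from collections import defaultdict, deque

def longest_path_length(n_nodes, edges):
    """Longest path in a DAG via Kahn-style topological order, O(V + E).
    edges is a list of (u, v, weight) triples; sources start at time 0."""
    adj = defaultdict(list)
    indeg = [0] * n_nodes
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    dist = [0.0] * n_nodes
    q = deque(i for i in range(n_nodes) if indeg[i] == 0)
    while q:
        u = q.popleft()
        for v, w in adj[u]:
            dist[v] = max(dist[v], dist[u] + w)  # longest, so take max
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return max(dist)
```

For a three-operation chain with durations 3, 5 and a shortcut of 2, the longest route (3 + 5 = 8) is the makespan.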

2.4 Missing Areas in Existing Research

The scheduling of a job shop is an NP-hard problem. Imaev [2] worked on the modeling of a job shop and provided a block diagram model using max-plus algebra.

Research to find specific structures in the job shop problem that allow efficient evaluation of performance measures (like makespan) using max-plus algebraic techniques has not been extensive. This work shows that there are certain structures, or variations of the problem, where the computational complexity of calculating the makespan of a system can be reduced considerably.

One exciting concept, which has not yet been researched, involves the isolation of the variant part from the invariant part in a scheduling problem. This special structure provides formulations, using max-plus algebraic techniques, that have the potential to open a new horizon for looking at a scheduling problem.


CHAPTER 3: MAX PLUS ALGEBRA

Max-plus algebra is used in modeling a class of discrete event systems [7]. Here, the maximum operator replaces the addition operator, and the addition operator replaces the multiplication operator. In this algebra the maximum operator is denoted by ⊕ and the addition operator is denoted by ⊗. As with conventional multiplication, the ⊗ operator is not written redundantly in this work; therefore, if there is no operator between two entities, the max-plus multiplication operator ⊗ is implied. The additive and multiplicative identity elements are denoted by ε and e respectively, where ε = −∞ and e = 0. To improve the readability of matrices with many ε elements, the symbol '–' will be used in place of ε.

Let ℝ be the set of real numbers and ℝ_ε = ℝ ∪ {ε}. Let a, b be two scalars which are a part of ℝ_ε; then

a ⊕ b = max(a, b) and a ⊗ b = a + b.

Let A be an m×n matrix which belongs to ℝ_ε^(m×n); then an element of this matrix will be represented either as [A]_ij or a_ij, whichever is clearer in the given context. Let A, B be two m×n matrices which belong to ℝ_ε^(m×n); then the addition of matrices is performed as follows:

[A ⊕ B]_ij = [A]_ij ⊕ [B]_ij

Let A ∈ ℝ_ε^(m×p) and B ∈ ℝ_ε^(p×n); then the multiplication of matrices is performed as follows:

[A ⊗ B]_ij = ⊕_(k=1)^(p) ([A]_ik ⊗ [B]_kj) = max_k ([A]_ik + [B]_kj)

In this work, the subscripts following the square bracket denote the specified element in the matrix. The Kleene star operator for an n×n matrix A is defined as

A* = ⊕_(k=0)^(∞) A^k, where

A^k = A ⊗ A^(k−1) and A^0 = E,

and E is the identity matrix, with its diagonal elements equal to e and all other elements equal to ε.

A detailed explanation of max-plus and its application within discrete event systems can be found in [7]. A divide and conquer approach is used to efficiently calculate the Kleene star operator [8]. The formulation for calculating the Kleene star operator by dividing the matrix into four parts is

[ a  b ]*   [ (a ⊕ b d* c)*          (a ⊕ b d* c)* b d* ]
[ c  d ]  = [ (d ⊕ c a* b)* c a*     (d ⊕ c a* b)*      ]      (3-1)

Using this formulation recursively results in a complexity of O(n³), where n is the number of rows/columns of A [8]. The Kleene star operator is used in solving linear equations in max-plus algebra using Theorem 3.1.
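The operations above can be sketched in Python. This is a minimal illustration under our own naming conventions (`mp_mul`, `mp_eye`, `mp_star`, with ε encoded as `-inf`); `mp_star` uses the direct power-series sum rather than the O(n³) block recursion of (3-1), which suffices for the acyclic schedule matrices used later in this work:

```python
import numpy as np

EPS = -np.inf   # epsilon, the max-plus additive identity
E = 0.0         # e, the max-plus multiplicative identity

def mp_mul(A, B):
    """Max-plus matrix product: [A (x) B]_ij = max_k ([A]_ik + [B]_kj)."""
    C = np.full((A.shape[0], B.shape[1]), EPS)
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def mp_eye(n):
    """Max-plus identity matrix: e on the diagonal, epsilon elsewhere."""
    I = np.full((n, n), EPS)
    np.fill_diagonal(I, E)
    return I

def mp_star(A):
    """Kleene star A* = E (+) A (+) A^2 (+) ... (+) A^(n-1); this finite
    sum is valid when A has no circuits of finite weight, e.g. when A is
    strictly lower triangular as in the schedule matrices used later."""
    n = A.shape[0]
    S, P = mp_eye(n), mp_eye(n)
    for _ in range(n - 1):
        P = mp_mul(P, A)
        S = np.maximum(S, P)   # (+) is the elementwise maximum
    return S
```

Note that matrix addition ⊕ is simply `np.maximum(A, B)`, and the scalar ⊗ is ordinary `+`.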

Theorem 3.1: Any equation of the form x = A x ⊕ b is solved by x = A* b, given that A* exists [7].

Theorem 3.2: (BA)* B = B (AB)*

Proof:

L.H.S. = (BA)* B
       = (E ⊕ BA ⊕ BABA ⊕ …) B
       = B ⊕ BAB ⊕ BABAB ⊕ …

R.H.S. = B (AB)*
       = B (E ⊕ AB ⊕ ABAB ⊕ …)
       = B ⊕ BAB ⊕ BABAB ⊕ …

Therefore, L.H.S. = R.H.S. □

Theorem 3.3: (R A R^T)* = R A* R^T, where R is a permutation matrix, and R R^T = R^T R = E.

Proof:

R.H.S. = R (E ⊕ A ⊕ A² ⊕ A³ ⊕ …) R^T
       = R R^T ⊕ R A R^T ⊕ R A² R^T ⊕ R A³ R^T ⊕ …
       = E ⊕ (R A R^T) ⊕ (R A R^T)(R A R^T) ⊕ (R A R^T)(R A R^T)(R A R^T) ⊕ …
       = (R A R^T)*

(since R^T R = E, each R A^k R^T factors as (R A R^T)^k). Therefore, L.H.S. = R.H.S. □

CHAPTER 4: BLOCK DIAGRAM MODELING APPROACH

The work described in this chapter was presented at IIE 2012 [101]. A max-plus algebra block diagram based approach for modeling manufacturing systems, given by Imaev [2], is used in this work. Let N and M be the number of jobs and the number of resources available to a manufacturing system, respectively. In this work, it is assumed that there is a buffer after each resource (no blocking); therefore, the generic three-input/three-output blocks in [2] can be simplified to two-input/two-output blocks. A generic manufacturing block (with no blocking) is represented in Figure 4.1. The vector variables used in modeling the system are defined below:

 u is the available times of jobs entering the system.

 x is the release times of jobs leaving the system.

 w is the available times of resources.

 z is the completion times of resources.

Figure 4.1. Manufacturing system block


A block can represent any kind of system, from a single operation to an entire factory. Figure 4.2 shows a block for a single operation, where the variables are given below:

 [um]n indicates the time at which job n is available to resource m.

 [xm]n indicates the time at which job n leaves resource m.

 [wm]n indicates the time at which resource m is ready to process job n.

 [zm]n indicates the time at which resource m finishes processing job n.

 Pm is a diagonal matrix whose elements are the processing times of the operations

for machine m.

Since Pm is a diagonal matrix, instead of referring to the n-th diagonal element with the notation [Pm]nn, a shorter notation, [Pm]n, will be used throughout this work.

Figure 4.2. Representation of an operation of job n on machine m

The max-plus model for a single operation (Figure 4.2) on machine m = 1, …, M for part n = 1, …, N is given by

[x_m]_n = [P_m]_n ([u_m]_n ⊕ [w_m]_n) and      (4-1)
[z_m]_n = [x_m]_n,

where [P_m]_n is the processing time of the operation.

When a resource finishes processing a job, it begins the next job (z of one operation connects to w of the next, vertically), and the finished job moves on to its next resource (x of one operation connects to u of the next, horizontally), according to the predefined system constraints and scheduling of jobs. The process of combining blocks is called composition. Matrix equations can be used to model the composite blocks. Further, the composite blocks can be plugged together to form even more complex systems.

4.1 Applying the Block Diagram Approach to a Single Machine

Now a generic example structure is considered that will develop the matrix equation describing the flow of a single machine m processing all of its assigned jobs.

The block diagram structure for the example problem is shown in Figure 4.3, where the given machine flows through the operations vertically.

The representation in Figure 4.3 can be redrawn as shown in Figure 4.4. The diagram in Figure 4.4 shows the flow of resource variables and job variables through the system using vectors and matrices. The following equations are easily derived by examining Figure 4.3 and Figure 4.4:

x_m = P_m (u_m ⊕ f w_m ⊕ N x_m) and z_m = l^T x_m



Figure 4.3. An example structure

where the matrix f selects the first operation, the matrix l^T selects the last operation, and the matrix N models the connection of each previous operation to the next operation:

f = [ e  ε  …  ε ]^T,   l^T = [ ε  …  ε  e ],   [N]_(i,i−1) = e (and ε elsewhere).

Further, P_m is a diagonal matrix containing the processing times:

P_m = diag([P_m]_1, [P_m]_2, …, [P_m]_N).



Figure 4.4. An example structure (flow diagram)

Combining these equations results in

x_m = P_m u_m ⊕ P_m f w_m ⊕ P_m N x_m,

which, by Theorem 3.1, is solved by

x_m = (P_m N)* P_m (u_m ⊕ f w_m) and z_m = l^T x_m.

These equations can be reformatted in a single matrix system as shown below:

[ x_m ]   [  E  ]
[     ] = [     ] (P_m N)* P_m [ E   f ] [ u_m ]
[ z_m ]   [ l^T ]                        [ w_m ]      (4-2)

36

It is easy to verify using direct multiplication that

                [ p_1               –               –     …   –   ]
                [ p_1 ⊗ p_2         p_2             –     …   –   ]
(P_m N)* P_m =  [ p_1 ⊗ p_2 ⊗ p_3   p_2 ⊗ p_3       p_3   …   –   ]      (4-3)
                [ …                 …               …     …   –   ]
                [ p_1 ⊗ … ⊗ p_N     p_2 ⊗ … ⊗ p_N   …     …   p_N ]

where p_n = [P_m]_n. Equation (4-3) shows that for a given sequence, the calculation of (P_m N)* P_m can be performed without taking the star of (P_m N) and then multiplying the result by P_m. Rather, it can be calculated recursively, just by ⊗-ing p_i (i.e. adding it) to the i−1 elements of row i−1 to form row i, with p_i placed on the diagonal; the first row just contains p_1 in the first column and – in all other columns. It is easy to see that the computational complexity of this approach is O(N²), instead of O(N³). For a comprehensive explanation of the block diagram approach, readers are suggested to refer to [2].
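The row-recursive construction just described can be sketched as follows. This is an illustrative Python sketch (the name `star_times_p` is ours), assuming `p` holds the processing times in schedule order and ε is encoded as `-inf`:

```python
import numpy as np

EPS = -np.inf  # max-plus epsilon

def star_times_p(p):
    """Build (P N)* P for a single machine directly from the processing
    times p[0..N-1], row by row, in O(N^2) -- no Kleene star needed.
    Row 0 is [p_0, eps, ..., eps]; row i adds p_i to the first i entries
    of row i-1 and places p_i on the diagonal."""
    n = len(p)
    T = np.full((n, n), EPS)
    T[0, 0] = p[0]
    for i in range(1, n):
        T[i, :i] = T[i - 1, :i] + p[i]
        T[i, i] = p[i]
    return T
```

For p = (3, 5, 2), the first column comes out as (3, 8, 10) — the cumulative completion times along the machine.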

4.2 An Example Problem

This section presents an example problem that demonstrates the application of the block diagram modeling approach. The model is of one machine that processes three jobs. The processing order and processing times for the respective jobs are given in Table 4.1. The block diagram representation of this system is shown in Figure 4.5.

Table 4.1. Processing order and processing times

  Processing Order:   Job1   Job2   Job3
  Processing Time:       3      5      2



Figure 4.5. Example problem (single machine structure)

The inputs for the example scheduling problem are u_m = [ u_1  u_2  u_3 ]^T and w_m, and the outputs are x_m = [ x_1  x_2  x_3 ]^T and z_m.

Using equation (4-3) for this problem we get

                [  3  –  – ]
(P_m N)* P_m =  [  8  5  – ]
                [ 10  7  2 ]

Using equation (4-2), the system outputs x_m and z_m can be mapped to the inputs for the example problem:

[ x_m ]   [  E  ]                        [ u_m ]   [  3  –  –   3 ]
[     ] = [     ] (P_m N)* P_m [ E  f ]  [     ] = [  8  5  –   8 ] [ u_m ]
[ z_m ]   [ l^T ]                        [ w_m ]   [ 10  7  2  10 ] [ w_m ]
                                                   [ 10  7  2  10 ]

Makespan is the time difference between the machine entering the system and leaving the system. This corresponds to the entry that maps w_m to z_m. Therefore, the makespan for this example problem is 10.
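The single-machine example can be checked numerically. The sketch below uses our own helper names, encodes f and l^T as max-plus vectors, rebuilds (P_m N)* P_m for the processing times of Table 4.1, and extracts the entry mapping w_m to z_m:

```python
import numpy as np

EPS = -np.inf

def mp_mul(A, B):
    """Max-plus matrix product: [A (x) B]_ij = max_k ([A]_ik + [B]_kj)."""
    C = np.full((A.shape[0], B.shape[1]), EPS)
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

# (P N)* P built row-recursively for processing times 3, 5, 2 (Table 4.1)
p = [3.0, 5.0, 2.0]
T = np.full((3, 3), EPS)
T[0, 0] = p[0]
for i in range(1, 3):
    T[i, :i] = T[i - 1, :i] + p[i]
    T[i, i] = p[i]

f = np.array([[0.0], [EPS], [EPS]])   # selects the first operation
l = np.array([[EPS, EPS, 0.0]])       # l^T, selects the last operation

# coefficient mapping w (machine available) to z (machine done):
makespan = mp_mul(mp_mul(l, T), f)[0, 0]
print(makespan)  # 10.0
```

The result agrees with the hand calculation above: l^T (P_m N)* P_m f = 3 + 5 + 2 = 10.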


CHAPTER 5: MODELING THE JOB SHOP SYSTEM

This chapter develops an algorithm to create a model of a complete job shop by connecting block models of individual subsystems. The approach is based on a constructive method which calculates the model of the composition of two subsystems. Figure 5.1 shows the structure of the system when a subsystem F is added to a subsystem G, where F and G are the relationships between job arrival and leaving times for subsystems 1 and 2 respectively. This constructive process can be repeated until all the subsystems are added, giving the formulation for the entire shop.


Figure 5.1. Block diagram for modeling the composition of two subsystems

The inputs and outputs of both systems can be divided into 4 categories, depending on: (1) the jobs that get processed by G only, (2) the jobs that get processed by G first and then F, (3) the jobs that get processed by F only, and (4) the jobs that get processed by F first and then G.

Here, we have:

 u    – the job available times to the composite system.
 x    – the job departure times from the composite system.
 u_g  – the job available times to G.
 x_g  – the job departure times from G.
 u_f  – the job available times to F.
 x_f  – the job departure times from F.
 S_gc – the matrix that selects the job available times that fulfill criteria (1) and (2) from the paragraph above.
 S_fc – the matrix that selects the job available times that fulfill criteria (3) and (4) from the paragraph above.
 S_fg – the matrix that maps the outputs from G to inputs to F for criterion (2) from the above paragraph.
 S_gf – the matrix that maps the outputs from F to inputs to G for criterion (4) from the above paragraph.
 S_cg – the matrix that maps the outputs from G to x.
 S_cf – the matrix that maps the outputs from F to x.

The following equations describe the composition of the two systems shown in Figure 5.1:

u_g = S_gc u ⊕ S_gf x_f      (5-1)

u_f = S_fc u ⊕ S_fg x_g      (5-2)

x_g = G u_g      (5-3)

x_f = F u_f      (5-4)

x = S_cg x_g ⊕ S_cf x_f      (5-5)

Combining (5-1) and (5-2), we get

[ u_g ]   [ S_gc ]       [ –     S_gf ] [ x_g ]
[ u_f ] = [ S_fc ] u  ⊕  [ S_fg  –    ] [ x_f ]      (5-6)

Combining (5-3) and (5-4), we get

[ x_g ]   [ G  – ] [ u_g ]
[ x_f ] = [ –  F ] [ u_f ]      (5-7)

Putting (5-6) in (5-7), we get

[ x_g ]   [ G  – ] [ S_gc ]       [ G  – ] [ –     S_gf ] [ x_g ]
[ x_f ] = [ –  F ] [ S_fc ] u  ⊕  [ –  F ] [ S_fg  –    ] [ x_f ]      (5-8)

or, by Theorem 3.1,

[ x_g ]   ( [ G  – ] [ –     S_gf ] )*  [ G  – ] [ S_gc ]
[ x_f ] = ( [ –  F ] [ S_fg  –    ] )   [ –  F ] [ S_fc ] u      (5-9)

Using (BA)* B = B (AB)* (see Theorem 3.2) in (5-9), we get

[ x_g ]   [ G  – ] ( [ –     S_gf ] [ G  – ] )*  [ S_gc ]
[ x_f ] = [ –  F ] ( [ S_fg  –    ] [ –  F ] )   [ S_fc ] u      (5-10)

Putting (5-10) in (5-5), we get

x = [ S_cg  S_cf ] [ G  – ] ( [ –       S_gf F ] )*  [ S_gc ]
                   [ –  F ] ( [ S_fg G  –      ] )   [ S_fc ] u      (5-11)

Using (3-1) to calculate the star in (5-11), we get

x = [ S_cg  S_cf ] [ G  – ] [ (S_gf F S_fg G)*           (S_gf F S_fg G)* S_gf F ] [ S_gc ]
                   [ –  F ] [ (S_fg G S_gf F)* S_fg G    (S_fg G S_gf F)*        ] [ S_fc ] u      (5-12)

which multiplies out to

x = ( (S_cg ⊕ S_cf F S_fg) G (S_gf F S_fg G)* (S_gc ⊕ S_gf F S_fc) ⊕ S_cf F S_fc ) u      (5-13)

Equation (5-13) provides the relationship between job arrival times and job leaving times for the composite system. By definition, the selection matrices have at most one e in each row and column; the remaining elements are all ε. It is easy to show that the product of a selection matrix S and a matrix A can be calculated using the following technique: if row i of S is all ε, then row i of S ⊗ A is set to all ε; otherwise, the j-th row of A is copied into the i-th row of S ⊗ A, where j is the column in which the e element is located in row i of S. This multiplication is then just O(N²). Likewise, A ⊗ S can be calculated in a similar manner, except the operations are done column-wise.
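The row-copying trick can be sketched directly. This is illustrative code (the name `select_rows` is ours), assuming e is encoded as 0 and ε as `-inf`:

```python
import numpy as np

EPS = -np.inf

def select_rows(S, A):
    """Compute the max-plus product S (x) A in O(N^2) when S is a
    selection matrix (at most one e = 0 per row and column, eps
    elsewhere): row i of the result is row j of A, where S[i, j] = e,
    or all eps if row i of S has no e."""
    out = np.full((S.shape[0], A.shape[1]), EPS)
    for i in range(S.shape[0]):
        js = np.flatnonzero(S[i, :] == 0.0)
        if js.size:                 # row i selects row js[0] of A
            out[i, :] = A[js[0], :]
    return out
```

The column-wise variant for A ⊗ S is symmetric: copy (or blank) columns of A according to where the e elements sit in S.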

5.1 Algorithm for Efficient Calculation of the Composite System Matrix

This section presents an efficient algorithm for finding the relationship between the job leaving and arrival times of a composite system using (5-13).


Algorithm 5.1. Efficient calculation of the composite system matrix

Input: G, F, and the selection matrices S_gc, S_fc, S_fg, S_gf, S_cg and S_cf
Output: System matrix relating x to u

Step  Computation                                                     Complexity
1     Select rows of … corresponding to …                             O(N^2)
2     Select columns of the result from step (1) corresponding to …   O(N^2)
3     ⊗ … to the result of step (2)                                   O(N)
4     Select rows of … corresponding to …                             O(N^2)
5     Select columns of the result from step (4) corresponding to …   O(N^2)
6     ⊗ … to the result of step (5)                                   O(N)
7     Select rows of … corresponding to …                             O(N^2)
8     ⊗ the result of step (7) and step (4)                           O(N^3)
9     Calculate the star of the result from step (8) using (3-1)      O(N^3)
10    Select columns of the result from step (1) corresponding to …   O(N^2)
11    ⊗ the result of step (3) and …                                  O(N^3)
12    ⊗ the result of step (11) and step (9)                          O(N^3)
13    ⊗ the result of step (12) and step (6)                          O(N^3)
14    ⊗ the result of step (13) and step (10)                         O(N^2)

The overall computational complexity of this algorithm is O(N³).

5.2 Special Cases

This section presents different structures of the composite system, depending on the nature of the job flow between two subsystems.

5.2.1 Generic Flow Shop

Figure 5.2 shows the structure of the composite system, when there is no flow from F to G.

Putting S_gf = ε, (5-13) is transformed into

x = ( (S_cg ⊕ S_cf F S_fg) G S_gc ⊕ S_cf F S_fc ) u      (5-14)



Figure 5.2. Modeling the addition of a machine

Note that (5-14) does not include a Kleene star term, because there is no recirculation of the jobs in the system, as shown by Figure 5.2.

5.2.2 Flow Shop with all Jobs Flowing from G to F

Figure 5.3 shows the structure of the composite system when all the jobs flow from G to F.


Figure 5.3. Flow shop with all jobs flowing from G to F

Putting S_gc = S_fg = S_cf = E and S_cg = S_gf = S_fc = ε, (5-13) is transformed into:

x = F G u      (5-15)

5.2.3 Recirculation of Jobs in G

Figure 5.4 shows the structure of the composite system, when recirculation of jobs occurs in G.

Putting S_fc = S_cf = ε, (5-13) is transformed into

x = S_cg G (S_gf F S_fg G)* S_gc u

This can be further reduced, using Theorem 3.2, to

x = S_cg (G S_gf F S_fg)* G S_gc u      (5-16)


Figure 5.4. Recirculation of jobs in G

5.2.4 No Interaction between F and G

Figure 5.5 shows the structure of the composite system when there is no flow of jobs from G to F or vice-versa.


Figure 5.5. No interaction between F and G

Putting S_gf = S_fg = ε, (5-13) is transformed into

x = ( S_cg G S_gc ⊕ S_cf F S_fc ) u      (5-17)

5.3 Example Problem

This section presents an example problem comprising 4 jobs and 4 machines. The composition approach will be used to obtain the relationship between x and u of the system.

Table 5.1. Processing times

       M1  M2  M3  M4
  J1    2   3   4   5
  J2    1   2   3   4
  J3    5   3   3   4
  J4    4   3   3   5


Table 5.2. Processing order for jobs

  J1:  M1  M3  M2  M4
  J2:  M2  M1  M3  M4
  J3:  M1  M2  M3  M4
  J4:  M4  M3  M2  M1

Table 5.3. Initial processing order for machines

  M1:  J1  J2  J3  J4
  M2:  J1  J2  J4  J3
  M3:  J1  J2  J4  J3
  M4:  J1  J2  J4  J3

Let us divide the system into four subsystems, each consisting of all the operations on one machine. Let machine i be described by F_i. Also, the system matrix for the composition of F_i, F_j and F_k is represented by F_ijk. More formally, we have

F_i = the system matrix of machine i, ∀ i ∈ {1, 2, 3, 4}, and

F_ij…M = the composition of F_i, F_j, …, F_M, ∀ i, j, …, M ∈ [1, M].

[ ] , [ ] ,

[ ] and [ ] (5-18)

For the composition of machines M1 and M2, the selection matrices are:


[ ], [ ] [ ],

[ ], and (5-19)

[ ] [ ]


Figure 5.6. Subsystem consisting of all operations from M1 and M2

In Figure 5.6, an operation of job i on machine k is represented by JiMk.

Using (5-13) we get the matrix F_12, which maps the job arrival times to the job leaving times for the composite system of M1 and M2 (see Figure 5.6).

(5-20)

[ ]

For composition of M1, M2 and M3, the selection matrices are

, ,

[ ] [ ] [ ]

[ ], [ ] [ ] and


Figure 5.7. Subsystem consisting of all operations from M1, M2 and M3

Using (5-13), we get the matrix F_123, which maps the job arrival times to the job leaving times of the composite system of M1, M2 and M3 (see Figure 5.7).

[ ].

For the composition of M1, M2, M3 and M4, the selection matrices are

, ,

[ ] [ ] [ ]

[ ], [ ] [ ] and


Figure 5.8. Subsystem consisting of all operations from M1, M2, M3 and M4


Using (5-13), we get the matrix F_1234, which maps the job arrival times to the job leaving times for the composite system of M1, M2, M3 and M4 (see Figure 5.8). Furthermore, this provides the final system matrix, because F_1234 maps the job arrival times, u, to the job leaving times, x, for the entire system.

[ ] (5-21)

Therefore,

[ ]

The makespan of the system can be found as the maximum entry in the final system matrix (F_1234). Therefore, the makespan for the example problem is 40.


CHAPTER 6: BI-PART MODELING OF A JOB SHOP USING MAX PLUS ALGEBRA

The work done in this chapter was presented in IIE 2012 [101]. This chapter

describes a model of a job shop system that divides the entire system into two parts: the

invariant sub-system I and the variant sub-system V, as shown in Figure 6.1. It is

assumed that all the perturbations in the schedule occur only in V. Since I maintains a

constant structure, it can be modeled by a series of matrices, which remain constant as the

various scheduling perturbations are applied. This can dramatically reduce the amount of

calculations required to find the makespan of the entire system due to perturbations in V.

Figure 6.1. Division of a manufacturing system into I and V


The basic approach is to divide the system into the invariant I and variant V subsystems. A heuristic is used to perturb V, and the best perturbation is selected. Finally, the search for better solutions can continue by dividing the system into a new set of I and V subsystems. The chapter also shows how to calculate the makespan for a special structure of subsystem I. Lastly, it considers a special structure where V consists of the operations on a single machine.

Consider the system in Figure 6.1, where the inputs and outputs of the invariant and variant subsystems are distinguished using subscripts i and v respectively. Here, x_v is the output of the variant subsystem and u_v is the input of the variant subsystem. Similarly, x_i is the output of the invariant subsystem and u_i is the input to the invariant subsystem.

V is modeled by

x_v = F u_v      (6-1)

while I is modeled by

[ x_i ]   [ A  B ] [ u_i ]
[     ] = [      ] [     ]      (6-2)
[ x   ]   [ C  D ] [ u   ]

where A, B, C and D partition the system matrix of I.

Let us assume that all the machines are available at time zero. Also, since the makespan is the difference between the time when the first job enters the system and the time when the last job leaves the system, the w and z of the system need not be considered while modeling the system. As shown in Figure 6.1, the outputs of the subsystem I are equal to the inputs of subsystem V and vice-versa. This implies

u_v = x_i and u_i = x_v.      (6-3)

Combining (6-1) – (6-3) results in

u_v = A F u_v ⊕ B u and x = C F u_v ⊕ D u,

or, solving for u_v with Theorem 3.1,

x = ( C F (A F)* B ⊕ D ) u.      (6-4)

Figure 6.2. Relationship between the input and output of the manufacturing system

The relationship between the outputs and the inputs of the system is represented by equation (6-4). Also, Figure 6.2 gives a pictorial representation of this relationship using a flow diagram [2]. This shows the mapping to the vector x (the times that the jobs leave the system) given u (the times that the jobs enter the system). Assuming all the jobs are available at time zero, we can conclude that the maximum element of the matrix ( C F (A F)* B ⊕ D ) is the makespan of the system.

6.1 Determining the Makespan of a System

Let us assume that all the jobs are available to the system at time zero, i.e.

u = [ 0  0  …  0 ]^T      (6-5)

In max-plus notation, the makespan (MS) of a system can be expressed as

MS = max_n [x]_n      (6-6)

Combining (6-4) – (6-6), the MS equation is transformed into

MS = maximum element of ( C F (A F)* B ⊕ D )

or

MS = c̄ F (A F)* b̄ ⊕ d̄      (6-7)

where

c̄ is a row vector, where each element is the maximum element of the corresponding column of C,

b̄ is a column vector, where each element is the maximum element of the corresponding row of B, and

d̄ is the maximum element of D.

It should be noted that A, c̄, b̄ and d̄ are all properties of I and remain constant as the schedule on V is perturbed.
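Equation (6-7) can be sketched directly in code. The helper below is our own illustration (names `mp_mul`, `mp_star`, `makespan` are ours, with ε encoded as `-inf`); `mp_star` uses a finite power-series sum, which is valid when A F contains no recirculation circuits:

```python
import numpy as np

EPS = -np.inf

def mp_mul(A, B):
    """Max-plus matrix product."""
    C = np.full((A.shape[0], B.shape[1]), EPS)
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def mp_star(A):
    """A* = E (+) A (+) ... (+) A^(n-1), valid for acyclic A."""
    n = A.shape[0]
    S = np.full((n, n), EPS)
    np.fill_diagonal(S, 0.0)
    P = S.copy()
    for _ in range(n - 1):
        P = mp_mul(P, A)
        S = np.maximum(S, P)
    return S

def makespan(A, B, C, D, F):
    """MS = c_bar F (A F)* b_bar (+) d_bar, with the invariant condensed
    to c_bar (column maxima of C), b_bar (row maxima of B), d_bar (max of D)."""
    c_bar = C.max(axis=0, keepdims=True)   # 1 x n row vector
    b_bar = B.max(axis=1, keepdims=True)   # n x 1 column vector
    d_bar = D.max()
    AF_star = mp_star(mp_mul(A, F))
    ms = mp_mul(mp_mul(mp_mul(c_bar, F), AF_star), b_bar)[0, 0]
    return max(ms, d_bar)
```

Only c̄, b̄, d̄ and A need to be retained from I; re-evaluating a perturbed F then costs far less than recomposing the whole shop.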

6.2 Calculation of c̄, b̄ and d̄

The parameters of the invariant subsystem can be calculated by using the modeling methodology presented in chapter 5, where the invariant subsystem is simply the composition of all the machines except for the machine(s) included in the variant.

Figure 6.3. Generic structure of I and V

For example, Figure 6.3 shows the structure of the job shop where all operations on machine i (modeled by F_i) are a part of V, and all the operations on the rest of the machines (modeled by the composition of F_12…i−1 and F_i+1…M) are a part of I. Algorithm 5.1 can be used to find the system matrix for I.

Equations (5-13) and (6-4) present two alternate methodologies for finding the relationship between the job departure times and job arrival times for the entire job shop. Here, the invariant subsystem is represented by system G, and the variant subsystem by system F.

Comparing equations (5-13) and (6-4) we get

A = (S_fg G S_gf),  B = S_fg G S_gc,  C = (S_cg G S_gf ⊕ S_cf) and D = S_cg G S_gc.      (6-8)

Therefore, the parameters of the invariant subsystem c̄, b̄ and d̄ can be calculated using equation (6-8) and the definitions in (6-7).

An example of calculating the makespan of a shop consisting of 3 machines and 3 jobs follows. The data for the example problem are given in Table 6.1, Table 6.2 and Table 6.3. The variant consists of all the operations on machine 2; the rest of the operations are part of the invariant.

Table 6.1. Processing times

       M1  M2  M3
  J1    2   4   5
  J2    5   6   2
  J3    3   1   4

Table 6.2. Processing order for jobs

  J1:  M1  M2  M3
  J2:  M1  M2  M3
  J3:  M1  M3  M2

Table 6.3. Initial processing order for machines

  M1:  J1  J2  J3
  M2:  J1  J2  J3
  M3:  J3  J2  J1



Figure 6.4.(a) Composition of M1 and M3


Figure 6.4.(b) Composition diagram of M1 and M3

For the composition of M1 and M3 (shown in Figure 6.4) the selection matrices are

[ ], [ ] [ ],


[ ], [ ] and [ ]

Using (5-13) we get the matrix F_13, which maps the job arrival times to the job leaving times of the composite system of M1 and M3.

[ ]

Figure 6.5.(a) Composition of M1, M2 and M3


Figure 6.5.(b) Composition diagram of M1, M2 and M3


For the composition of M1, M2 and M3 (shown in Figure 6.5(b)), the selection matrices are [ ], [ ] [ ],

[ ], [ ] and [ ].

Using (6-8) to calculate the parameters of the invariant we get

[ ], [ ], [ ] and

[ ]

Therefore,

, [ ] and (6-9)

Also, using F (given as F_2 for machine 2) and (6-9), we can calculate the makespan of the system using equation (6-7). The makespan of the example system is 21.

The following section presents an analysis of the makespan equation (6-7).

6.3 Analysis of the Makespan equation of the System – A Special Case

If the makespan equation (6-7) is analyzed carefully, it can be noted that it has two distinct parts. On careful examination of the equation, the following lemma can be proved.

Lemma 6.1: If c̄ F (A F)* b̄ ≤ d̄, then no possible permutation of F can result in a lower makespan than that given by d̄.

Proof: Different permutations of F can only change the value of c̄ F (A F)* b̄. But equation (6-7) is bounded below by d̄. □

A direct implication of Lemma 6.1 is that if a heuristic finds a schedule (F) for V where c̄ F (A F)* b̄ ≤ d̄, then this is an optimal schedule for the system, for the given I, and the search for a schedule for V can be terminated. The physical meaning of d̄ is the maximum time it takes for any job that does not go through the variant subsystem to finish.

In the next chapter a special structure of subsystem I that considerably simplifies the calculation of the MS is discussed.

CHAPTER 7: MODELING OF A JOB SHOP WITHOUT RECIRCULATION OF JOBS

The work done in this chapter was presented in IIE 2012 [101]. This chapter deals with the modeling of a special structure of the invariant system, one that does not allow recirculation. An invariant system has recirculation whenever the arrival time of jobs into

V is dependent on the departure times of jobs from V.

7.1 Makespan Equation for a Job Shop without Recirculation of Jobs

Suppose that u_v does not depend on x_v (using the definitions from chapter 5). Then A must be the "zero" matrix, i.e. all its elements are ε. It is clear from the definition of the Kleene star operator that the star of the zero matrix is the identity matrix E. Then, under this assumption, (6-7) simplifies to

MS = c̄ F b̄ ⊕ d̄.      (7-1)

It is an amazing result that if there is no recirculation in I, then all the information about I is contained in two vectors, c̄ and b̄, and a scalar, d̄. So, the calculation of MS once F is known is simply the pre- and post-multiplication of F by two vectors and the addition of a scalar.

7.2 Calculation of F under Scheduling Perturbations for V Containing a Single Machine

Assume that V contains all the operations of a single machine. Now suppose the schedule is perturbed. The new schedule just rearranges how the z's are connected to the w's. This can be modeled by the insertion of a permutation matrix R in the connection between z and w, as shown in Figure 7.1. A permutation matrix contains exactly one e in each row and column and ε elsewhere. It is easy to see that if b = R a, then b is just a rearrangement of the elements in a. A well-known property of all permutation matrices is that R R^T = R^T R = E. Figure 7.1 is similar to the system representation in Figure 4.4, modified to model the effects of a perturbed schedule. Here, the operation blocks remain fixed in the natural order of the jobs. The completion time of each machine is rearranged by R, which represents the desired schedule. N then sets the start time of the next operation to the completion time of the previous operation. Finally, R^T re-sorts these start times to the operations, which remain in their natural order. It should be noted that R can be easily constructed: the e in row i is placed in column j, where j represents the position of operation i in the schedule. For example, given three operations that are to be scheduled in the following order, the corresponding permutation matrix is given by

[ ].
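The construction of R just described is mechanical. A sketch in Python (our own naming; operations are 0-indexed and `schedule[k]` is the operation processed k-th):

```python
import numpy as np

EPS = -np.inf

def perm_matrix(schedule):
    """Build the max-plus permutation matrix R for a schedule: the
    e (= 0) in row i is placed in column j, where j is the position of
    operation i in the schedule; all other entries are eps."""
    n = len(schedule)
    R = np.full((n, n), EPS)
    for pos, op in enumerate(schedule):
        R[op, pos] = 0.0
    return R
```

For the schedule (op 1, op 2, op 0), operation 0 sits in position 2, so the e of row 0 lands in column 2, and so on; multiplying by R then rearranges a vector accordingly.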

The following equations represent the system in Figure 7.1, where the start times are formed in schedule order and then re-sorted back to the natural order of the operations:

x_v = P (u_v ⊕ R^T t)      (7-2)

t = N r ⊕ f w_v      (7-3)

r = R x_v      (7-4)

z_v = l^T r      (7-5)

Combining (7-2) to (7-5) we obtain

x_v = P (u_v ⊕ R^T N R x_v ⊕ R^T f w_v),      (7-6)

which, by Theorem 3.1, is solved by

x_v = (P R^T N R)* P (u_v ⊕ R^T f w_v).      (7-7)



Figure 7.1. Special structure of variant subsystem

Putting this into a matrix form,

[ x_v ]   [  E    ]
[     ] = [       ] (P R^T N R)* P [ E   R^T f ] [ u_v ]
[ z_v ]   [ l^T R ]                              [ w_v ]      (7-8)

It is easy to verify that (P R^T N R)* = R^T (R P R^T N)* R (see Theorem 3.3), so (7-8) can be rewritten as

[ x_v ]   [ R^T ]
[     ] = [     ] (R P R^T N)* R P [ E   R^T f ] [ u_v ]
[ z_v ]   [ l^T ]                                [ w_v ]      (7-9)


Let P_s = R P R^T (this is simply the matrix P with its diagonal elements rearranged consistent with the schedule); then (7-9) becomes

[ x_v ]   [ R^T ]
[     ] = [     ] (P_s N)* P_s [ R   f ] [ u_v ]
[ z_v ]   [ l^T ]                        [ w_v ]      (7-10)

Notice the similarity between (4-2) and (7-10). Combining (7-10) and (7-1) yields

MS = c̄ R^T (P_s N)* P_s R b̄ ⊕ d̄      (7-11)

or

MS = (c̄ R^T) (P_s N)* P_s (R b̄) ⊕ d̄.      (7-12)

Since (P_s N)* P_s has the same structure as given in equation (4-3) of chapter 4, we can calculate (P_s N)* P_s by simply putting the processing times of the perturbed schedule into a matrix of the form given in equation (4-3). Also, when there is a perturbation in the variant subsystem, the calculation of c̄ R^T and R b̄ can be done by a simple reordering of c̄ and b̄, with the last element of c̄ added to its second-to-last element, and the last element of b̄ added to its first element. This is an exciting result, because it dramatically reduces the calculations required to compute the makespan for this case.

7.3 Algorithm for Efficient Calculation of Makespan under Scheduling Perturbations for V

An algorithm to efficiently calculate the (7-12) is given in Algorithm 7.1. 66

Algorithm 7.1. Efficient calculation of the MS
Input: Matrices , , , ,
Output: Makespan (MS)

Step 1 (O(N)): Reorder and  to the last element.
Step 2 (O(N)): Reorder and  to the first element.
Step 3 (O(N)): Reorder the processing times to form .
Step 4 (O(N²)): Use equation (4-3) to calculate ( ).
Step 5 (O(N²)): Pre-multiply ( ) by the reordered .
Step 6 (O(N)): Multiply the result of step (5) by the reordered .
Step 7 (O(1)): Add to the result of step (6).

The total complexity of calculating MS is then O(N²).
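The complexity claims above rest on the cost of the basic max-plus products. A minimal sketch of these primitives (the helper names are mine; ε is represented by -inf):

```python
import math

EPS = -math.inf  # the max-plus zero element, epsilon

def mp_matvec(M, v):
    """Max-plus product M (x) v: result_i = max_j (m_ij + v_j).
    Costs O(N^2) scalar operations for an N x N matrix, which is
    the cost of the pre-multiplication step of Algorithm 7.1."""
    return [max(m + x for m, x in zip(row, v)) for row in M]

def mp_dot(u, v):
    """Max-plus inner product u^T (x) v = max_j (u_j + v_j), in O(N),
    the cost of the final vector-vector step of Algorithm 7.1."""
    return max(a + b for a, b in zip(u, v))
```

Reordering a vector and the final scalar "addition" (a max) are O(N) and O(1), so the matrix-vector product dominates, giving the O(N²) total.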

The effects of several standard scheduling rules on the MS can be seen by examining equation (7-12). Regardless of the schedule, the elements in the matrix ( ) uniformly increase as the row index increases and uniformly decrease as the column index increases; see equation (4-3). Therefore, it is advantageous to have the schedule order the values from lowest to highest. But is the arrival time of job i into machine m. This indicates that a first-come-first-served schedule can be advantageous.

In a similar manner, it is advantageous to have the schedule order the values from highest to lowest. But is the remaining processing time of job i. This indicates that a greatest-remaining-processing-time schedule can be advantageous. Finally, it is easy to see from (4-3) that a shortest-processing-time schedule would minimize the values of the elements of ( ) and would also be advantageous.


7.4 Example Problem

Let us consider a job shop problem consisting of 3 jobs (J1…J3) and 7 machines (M1...M7); see Table 7.1 and Table 7.2. The variant system consists of all the operations (three operations) on one of the machines, the 5th machine, with

.

Table 7.1. Processing order of jobs / respective processing times

J1: M2/1 M1/2 M3/2 M4/2 M5/1 M6/2 M7/1
J2: M1/1 M3/2 M2/3 M4/2 M5/5 M6/2 M7/2
J3: M1/1 M2/2 M3/3 M4/2 M5/2 M6/2 M7/2

Table 7.2. Initial schedule for all the machines

M1 M2 M3 M4 M5 M6 M7
J1 J1 J1 J1 J1 J1 J1
J2 J2 J2 J2 J2 J2 J3
J3 J3 J3 J3 J3 J3 J2

Since the variant subsystem contains only the three operations on the 5th machine, is a 7×4 matrix and will be a 1×4 vector. Similarly, is a 4×7 matrix and will be a 4×1 vector. The sizes of the vectors and are independent of the number of machines in the system and depend only on the number of operations in the variant subsystem. The invariant system is described by

, [ ],


The makespan for the system is 25. This is calculated by simply using Algorithm 7.1 with , i.e., no reordering.

Now, suppose we wish to change the schedule to [ ]; then

[ ], [ ], and [ ].

Performing the procedural steps given in Algorithm 7.1, the results are shown in Table 7.3.

Table 7.3. Application of the makespan algorithm

Step 1: Reorder and  to the last element. Result: 
Step 2: Reorder and  to the first element. Result: [ ]
Step 3: Reorder the processing times to form . Result: [ ]
Step 4: Use equation (4-3) to calculate ( ). Result: [ ]
Step 5: Pre-multiply ( ) by the reordered .
Step 6: Multiply the result of step (5) by the reordered . Result: 30
Step 7: Add to the result of step (6). Result: 30

Therefore, the makespan is equal to 30. Now, if we examine another perturbed schedule, , the makespan in this case is 33. So, the original schedule was the best.

Similarly, we can calculate the makespan for different perturbations of the variant subsystem, select the best solution (for the given invariant system), and then proceed to other divisions of the system to obtain a promising (near-optimal) solution.


CHAPTER 8: MODELING OF JOB SHOP WITH RECIRCULATION OF JOBS

The work done in this chapter was presented in IIE 2013 [102]. This chapter generalizes the work done in chapter 7. It allows for recirculation, where the arrival times of jobs into the variant are dependent on the departure times of jobs from the variant.

One example of recirculation occurs when a job leaves the variant and is processed again by the variant at a later time. A less obvious example of recirculation is a scheduling dependency. A scheduling dependency occurs when job a leaves the variant and there exists a job b that is scheduled after job a on a machine in the invariant, where b still needs to be processed by the variant. In either case, a delay exists between the jobs exiting and entering the variant. This delay is called the recirculation delay. The recirculation delay is modeled by a matrix denoted by in (6-7). The structure of A plays an important role in the feasibility of the perturbed schedule. Suppose there is a job J2 in V that is scheduled to be completed after job J1. Further, assume that o1 is some operation for J1 on a machine in I that must wait for an operation o2 that is required by J2. In other words, J1 is waiting for J2 in I while J2 is waiting for J1 in V. This is called a circular wait condition, which causes a deadlock situation, i.e., an infeasible schedule.

Figure 8.1 shows the generic structure of the variant sub-system when some jobs recirculate through the variant subsystem. The recirculation is represented by matrix A.

This model also allows the number of operations processed by the machine to differ from the number of jobs. The definitions of the variables in the system follow the conventions of chapter 6.


Figure 8.1. Variant subsystem with recirculation

The subscripts v and m are used to distinguish between job-based and operation-based variables. For example, represents the jobs entering V, while denotes the inputs to the operations in V. Without any loss of generality, we will assume that in Figure 8.1 the operations are arranged in the order they are processed. The equations that describe V are given below:

( ) (8-1)

(8-2)

(8-3)

(8-4)

(8-5)

(8-6)

Here, matrix maps the times that the operations end, , to the times that the jobs are finished with the machine, . Likewise, matrix maps the times that the jobs enter V, , to the start times, , of the operations performed on the machine. Further, Pm is a diagonal matrix containing the processing times of all the operations, and A is the recirculation delay. Note that A is part of I; therefore, it remains constant as the schedule in V is perturbed.

[ ] [ ] , , and .

[ ]

Substituting (8-6), (8-2), and (8-3) into (8-1) results in

( ) ( ) (8-7)

Substituting (8-7) into (8-4) and (8-5), we obtain

[ ] [ ] ( ) [ ] (8-8)

Substituting (8-8) into the makespan equation (6-8) yields

[ ] ( ) (8-9)


8.1 Example Problem

An example problem with a variant subsystem consisting of 4 jobs (J1, J2, J3, and J4) on a single machine will be solved using (8-9) in this section. The jobs J3 and J4 recirculate once, as shown in Figure 8.2.

Figure 8.2. Example structure of a generic V with recirculation

The variant system consists of 6 operations on the given machine m. The machine available time , the job available times , the machine release time , and the job release times are shown in Figure 8.2. The variables to are the times the 4 jobs are initially available to the variant subsystem (machine m). Also, and are the times jobs J3 and J4 are available after recirculation.

, ,

[ ] [ ]

[ ] and

[ ]

Since four jobs enter the variant subsystem, and will be 1×5 and 5×1 vectors respectively. These vectors and are calculated using the approach in section 6.2.

The invariant system is described by , , and

[ ]

The makespan for the given example problem, calculated using (8-9), is 35.

8.2 Test for Feasibility of a Schedule

This section provides two lemmas that can be used jointly to provide criteria for the feasibility of a proposed schedule.

Lemma 8.1: The product of a diagonal matrix D times a general matrix M results in a matrix with each element in the ith row of M multiplied by the ith diagonal element of D. This multiplication has complexity O(N×M), where N and M are the dimensions of M.

Proof: The proof is obvious by direct multiplication. □
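In max-plus algebra, scalar "multiplication" is ordinary addition, so Lemma 8.1 amounts to shifting every element of row i by the ith diagonal entry of D. A sketch of this (the function name is mine; ε is represented by -inf):

```python
import math

EPS = -math.inf  # the max-plus zero element, epsilon

def diag_premultiply(d, M):
    """Max-plus product D (x) M with D = diag(d): element (i, j) of
    the result is d[i] + m_ij (with epsilon absorbing), i.e., every
    element of row i is 'multiplied' by d[i].  Each of the N x M
    elements is touched once, giving the O(NxM) cost of Lemma 8.1."""
    return [[d[i] + M[i][j] for j in range(len(M[0]))]
            for i in range(len(M))]
```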

Figure 8.3. Partial adjacency graph for

Lemma 8.2: ( ) does not exist if and only if has a non-ε element in the upper triangular region, including the main diagonal.

Proof: A non-ε element in the upper triangular region (including the main diagonal) of ensures that also contains a non-ε element in the identical location (using Lemma 8.1). has non-ε elements in all the entries just below the main diagonal. Given this, a partial adjacency graph for the matrix is shown in Figure 8.3, where the arcs connecting the nodes from top to bottom correspond to the non-ε elements in , and the arc going up corresponds to the non-ε element in the upper portion of . Of course, other arcs may exist. Because there is at least one loop in the graph, ( ) does not exist [7]. Now, if A does not have any non-ε elements in the upper triangular region, then all arcs in the adjacency graph of flow only downward. Therefore, a loop cannot exist and ( ) does converge [7]. □

Another way to view this is that any non-ε element in the upper triangular region of represents a path from the finish time of a current operation to the start time of an earlier operation. This creates a circular wait condition in the system network and hence a deadlock, or infeasible schedule, situation. A direct implication of Lemma 8.2 is that a quick and easy check for the feasibility of a proposed schedule is to look for non-ε elements in the upper portion of A.
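The quick feasibility check implied by Lemma 8.2 reduces to a scan of the upper triangle, including the diagonal. A sketch (assuming the matrix is stored densely with -inf playing the role of ε; the function name is mine):

```python
import math

EPS = -math.inf  # the max-plus zero element, epsilon

def schedule_is_feasible(A):
    """Lemma 8.2 check: a perturbed schedule is feasible iff the
    (reordered) recirculation-delay matrix has only epsilon entries
    in the upper triangular region, including the main diagonal."""
    n = len(A)
    return all(A[i][j] == EPS
               for i in range(n)
               for j in range(i, n))
```

The scan is O(N²) at worst and can exit early at the first offending entry, which is exactly the "quick and easy check" described above.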

Therefore, for a feasible schedule, has to be lower triangular. The following section provides a constructive approach to compute the star of a lower triangular matrix. This approach will then be used in the efficient calculation of ( ) , which in turn enables the efficient calculation of MS using (8-9).

8.3 Computation of the Star of a Lower Triangular Matrix

A constructive approach for computing the star of a lower triangular matrix is presented in Algorithm 8.1.

Algorithm 8.1. Efficient calculation of the Kleene star of a lower triangular matrix L
Input: Lower triangular matrix
Output: Star of the given matrix ( )

Step 1: Let , i = 1.
Step 2: Let [ ], where  , for .
Step 3: Let . If ( ), go to step 2; else go to step 4, where n is the dimension of L.
Step 4: .

Lemma 8.3: If Algorithm 8.1 is followed, then . The complexity of this algorithm is ( ).

Proof: Algorithm 8.1 is just the direct application of (3-1), applied to the successive upper left sub-matrices of L; see Figure 8.4. In step 2, there are ( ) multiplications and ( ) additions. Since , the number of multiplications is and the number of additions is for the algorithm; adding these gives a total number of operations equal to . □

Figure 8.4. Constructive division of a lower triangular matrix

Note that Algorithm 8.1 requires about of the number of operations to calculate the star of a triangular matrix that (3-1) requires for a general matrix.
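Because a strictly lower triangular matrix has arcs only from lower to higher indices, its star can also be obtained by a simple longest-path forward substitution. The sketch below is my own alternative formulation of this idea, not the partitioned recursion of Algorithm 8.1; it assumes the matrix is strictly lower triangular (ε on and above the diagonal), as Lemma 8.2 requires for feasibility, with ε represented by -inf:

```python
import math

EPS = -math.inf  # the max-plus zero element, epsilon

def star_lower_triangular(L):
    """Kleene star of a strictly lower triangular max-plus matrix.

    Entry (i, j) of L* is the maximum total weight of any path from
    node j to node i over the arcs of L, with e (= 0) on the diagonal.
    Only the lower triangle is ever touched, so the cost is roughly
    a third of the general n x n star computation."""
    n = len(L)
    S = [[EPS] * n for _ in range(n)]
    for j in range(n):
        S[j][j] = 0.0  # identity element e on the diagonal
        for i in range(j + 1, n):
            best = L[i][j]  # the direct arc j -> i, if any
            for k in range(j + 1, i):
                if L[i][k] > EPS and S[k][j] > EPS:
                    best = max(best, L[i][k] + S[k][j])
            S[i][j] = best
    return S
```

Since all arcs point "downward", no loop can form and the star always converges, matching the convergence argument of Lemma 8.2.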

The next section develops a model for the case of perturbation in the schedule of variant subsystem with recirculation.

8.4 Applying the Block Diagram Approach for Calculation of F for V Containing a Single Machine and Jobs Going through Recirculation and Perturbation

As shown in chapter 7, a perturbation in the schedule can be modeled by simply inserting a permutation matrix R, as shown in Figure 8.5. We will use a prime to denote

(8-10)

for any matrix M. It should also be noted that does not need to be calculated using (8-10). A more efficient way is to simply exchange the appropriate columns and rows as defined by R. Thus, the calculation of requires only 2 operations.

The system equations for the variant sub-system shown in Figure 8.5 are given by

( ) (8-11)

(8-12)

(8-13)

(8-14)

(8-15)

(8-16)

Equations (8-11) to (8-16) can be manipulated in a manner similar to (7-1) to (7-11) to yield

[ ] (( ) ) (8-17)

The matrix Q in the term [ ] just expands the dimension of by adding (O – N) zero elements. Then, reorders the elements of the resulting vector. Finally, adds the last element of to the vector resulting from the previous operations. Likewise, performs similar operations on . These manipulations take O(O) operations to perform.

Figure 8.5. Variant subsystem with recirculation and reordering

The makespan equation, (8-17), will be used to construct an efficient algorithm to calculate the makespan of a perturbed schedule. The next section illustrates the step-by-step procedure of this algorithm.

8.5 Algorithm to Calculate the Makespan of a Job Shop System with Recirculation and Reordering

During a perturbation, the makespan can be evaluated from (8-17) very efficiently using Algorithm 8.2. The number of operations in each step of Algorithm 8.2 is ( ) or less, except for step 7, for which it is ( ), where O is the number of operations on the machine.

Algorithm 8.2. Efficient calculation of the makespan for a system with recirculation
Input: Matrices , , , , , , and
Output: Makespan (MS)

Step 1: Reorder the rows and columns of to create ( ).
Step 2: If ( ) has any non-ε value in the upper triangular partition, the perturbed schedule is infeasible; exit.
Step 3: Reorder the first O elements of the obtained and  to the last element.
Step 4: Reorder the first O elements of the obtained and  to the first element.
Step 5: Calculate using the procedure in chapter 4.
Step 6: Compute ( ).
Step 7: Using Algorithm 8.1, compute (( ) ).
Step 8: Using Lemma 8.1, compute (( ) ).
Step 9: Pre-multiply the result of step (8) with the result of step (3).
Step 10: Multiply the result of step (9) with the result of step (4).
Step 11: Add to the result of step (10).

8.6 Example Problem

Let us reconsider the example problem presented in section 8.1. The sequence of operations is perturbed to [ ], which can be generated by defining

[ ]

Table 8.1. Application of Algorithm 8.2

Step 1: Reorder the rows and columns of to create ( ). Result: [ ]
Step 2: ( ) does not have any non-ε value in the upper triangular partition; therefore, the perturbed schedule is feasible. Result: [ ]
Step 3: Reorder the first O elements of the obtained and  to the last element.
Step 4: Reorder the first O elements of the obtained and  to the first element. Result: [ ]
Step 5: Calculate using the efficient procedure in [2]. Result: [ ]
Step 6: Add the result of step (1) to the result of step (5) to compute ( ). Result: [ ]
Step 7: Using Algorithm 8.1, compute (( ) ). Result: [ ]
Step 8: Using Lemma 8.1, compute (( ) ). Result: [ ]
Step 9: Pre-multiply the result of step (8) with the result of step (3).
Step 10: Multiply the result of step (9) with the result of step (4). Result: 31
Step 11: Add to the result of step (10). Result: 31

Table 8.1 shows the step-by-step usage of Algorithm 8.2 on the example problem. The new makespan of the example problem is 31. Now, if Algorithm 8.2 is applied to a different perturbation [ ], the makespan obtained is 39. Similarly, we can calculate the makespan for different perturbations of the variant subsystem, select the best solution (for a given invariant system), and proceed to other divisions of the system to obtain a promising (near-optimal) solution.

CHAPTER 9: HEURISTIC ALGORITHM FOR MINIMIZING MAKESPAN OF JOB SHOPS WITH RECIRCULATION

This chapter is an extension of the work presented in IIE 2013 [102]. This chapter proposes an alternate method of calculating the makespan, for V consisting of operations on a single machine, with a pictorial representation. This approach is called the SBA (Serial Block Addition) method. Equation (6-7), which calculates the makespan of the system, is used to formulate the SBA representation. A diagram used to construct a schedule using SBA is called the SBAD (Serial Block Addition Diagram). Further, the SBA will be modified and a new compressed representation of the schedule, the CSBAD (Compressed SBAD), will be developed. The CSBAD will then be used to construct a new scheduling algorithm. The scheduling algorithm, which is the main contribution of the chapter, provides a good schedule, given that the perturbations happen only on a single machine.

In the SBA method, each job is represented by a row containing three blocks. The following is a short review of the notation developed in previous chapters. The first block contains the element , which corresponds to the maximum possible arrival time of job j to V; the second block contains the processing time of job j in V; and the third block contains the element , which corresponds to the maximum possible remaining time for job j in I. is the recirculation delay, that is, the minimum time required between the finish of the block and the start of the block, ∀ . These rows are arranged such that none of the blocks overlap.


Figure 9.1. SBAD for the general case

9.1 Scheduling Equivalency

This section will prove the equivalency between the SBAD and the makespan equation, (6-7). Since only depends on the invariant sub-system, it will not be considered while discussing the scheduling equivalency (see section 6.3). Direct multiplication shows that (6-7) is equivalent to (9-1). It is assumed that the machine is available immediately and leaves the system right after it finishes the last operation in V. Under these conditions, the last two elements of and are not needed in (9-1) (see section 7.2).


[ ]

[ ⊕ ⊕ ⊕ ⊕ ]

⊕ (9-1)

[ ]

Theorem 9.1: Every schedule with a corresponding makespan equation given by (6-7) can also be represented by an equivalent SBAD.

Proof: It will be proved using mathematical induction that there exists a SBAD representation of every schedule whose makespan is calculated using (6-7).

Base Case (for one job):

(9-2)

The SBAD for the base case is shown in Figure 9.2. It is clear in both representations that the makespan is just the sum of the three values.

Figure 9.2. SBAD for the base case

General Case (for n jobs):

The general case for n jobs is assumed to be true; that is, the SBAD in Figure 9.1 represents equation (9-1).

Induction Step (n+1 jobs):

Substituting n+1 into (9-1) yields

⊕ ⊕ ⊕ ⊕

[ ⊕ ⊕ ⊕ ⊕ ]

[ ]

This can be rearranged as [ ]

[ ⊕ ⊕ ⊕ ⊕ ] [ ]

((( ⊕ ⊕ ) ⊕ ( ) ⊕ ( ⊕

) ) ⊕ ) (9-3)

The SBAD for the induction step is shown in Figure 9.3.


Figure 9.3. SBAD for the inductive step

As shown by (9-3), when (9-1) is written out for n+1 terms, it can be reduced to a structure similar to (9-1), plus an extra term. The term is composed of factors of the form: ( ⊕ ⊕ ) , ( ) , , ( ⊕ ) . The length of each of the factors is the sum of the length of block , followed by the stack ( ⊕ ⊕ ), and then followed by block . These factors can be represented in the SBAD by just adding a block of length to the stack, with a distance of , ∀ , between and . Afterwards, this is followed by a block of length . The last factor in (9-3) is modeled by simply succeeding the block with a block of length . Hence Figure 9.3 is the correct SBAD representation of (9-3). 

If there is a perturbation in the schedule, the SBAD representation of such a system will just be a different ordering of the blocks, and the new makespan is then simply the maximum length of the new SBAD. It should be noted that the construction procedure given above orders the jobs from top to bottom in the order that they are processed; however, this is not necessary. Any two rows in a SBAD can be interchanged without affecting the total length of the SBAD. Two constraints need to be satisfied to generate a feasible schedule and a correct makespan. First, the process blocks must not overlap. Second, the precedence relationships among the jobs, represented by matrix A, must not be violated. The schedule represented by a SBAD can be extracted by ordering the jobs in the same order that the blocks appear from left to right.

In the next section, we will show that all the information required to create a good schedule is contained in a more compressed version of the chart, called the CSBAD (Compressed Serial Block Addition Diagram). The CSBAD representation is then used to formulate an efficient algorithm for minimizing the makespan of a job shop with recirculation through V.

9.2 Scheduling Heuristic for Minimizing the Makespan

The basic approach of the algorithm is to add a job to a partial SBAD while trying to keep the total length of the SBAD to a minimum. When adding a row, the blocks of the jobs cannot overlap. The second objective is that the block is placed after every block, ∀ , and before every , ∀ . The area that satisfies these two constraints is called the feasible range for the block. A gap is defined to be the space between adjacent processing blocks. The key to the algorithm is to place the block of the new row into a gap that exists within the feasible range. The first gap is the space between the leftmost block and the start of the first block. The next gaps are the spaces between and , ∀ . Finally, the last gap is the space between the rightmost and block. Thus, there will always be one more gap than processing time blocks; however, some gaps can be of zero length. Gaps are labeled as g blocks. Feasible gaps are gaps contained in the feasible region.

Figure 9.4. CSBAD for the SBAD shown in Figure 9.3

The algorithm does not need all the information in a SBAD; it only needs to know where the feasible gaps are. So a compressed form of the SBAD, called the CSBAD, is developed. Figure 9.4 shows an equivalent CSBAD for the SBAD that appears in Figure 9.3. It is just a single row of and g blocks.

The first row in Figure 9.5 is a CSBAD with four processes and five gaps.

Assume that all of the elements in matrix A except and are equal to ε.

The central idea of the algorithm is to add a new job row such that the total length of the

CSBAD is minimized. Minimizing the total CSBAD length is equivalent to minimizing the makespan.

Figure 9.5. Algorithm Strategy Description

In Figure 9.5, each block in a CSBAD has been labeled using two parts. The first shows the type, , , , or , with the subscript showing the process the block belongs to. The second part is a number within parentheses that defines the length of the block. For example, the column labels g1(1) and ( ) can be interpreted as a 1st gap block with length 1 and a processing time block of job 3 with length 2.

Notice that and ; therefore, only the gaps after block and before block in the CSBAD are feasible for placing the new incoming row (job 5). Thus, the feasible gaps for the placement of the block are g2(12), g3(4), and g4(10). The rest of Figure 9.5 shows the possibilities for the incoming job to be placed in one of these three feasible gaps. When the new row is placed such that is inserted into g2, the resulting CSBAD is shown by option 1 in Figure 9.5. Notice that g2 is wider than . This provides a degree of freedom in placing the new row. The guideline is to place the row as far from the center of the CSBAD as possible, respecting all constraints and without unnecessarily increasing the length of the CSBAD.

Notice that after placing the new job in gap g2, the structure of g2 changes, since = g1 + = 3 in Figure 9.5. Similarly, when is placed in g3 or g4, the outcomes are shown by option 2 and option 3. In option 2, the existing gap g3 is not large enough to absorb . Therefore, it must be increased by 4 before is placed in it. However, and are absorbed by the rest of the CSBAD. Also note that the new gaps before and after have zero length. In option 3, the existing gap g4 is large enough to absorb and still maintain a gap of between and . However, the sum of the existing blocks g4, , and g5 is not large enough to absorb and . So, the length of the CSBAD must be extended. This results in the blocks g4(0), ( ), g5(2), ( ), and g6(2). Notice that g6 was increased by 1 unit to reflect the increase in the CSBAD due to the length of . In summary, option 1 has an unchanged total length of 40, compared to options 2 and 3, which have total lengths of 44 and 41, respectively. Therefore, option 1 is the best outcome among the three possibilities. Hence, the new job should be placed in g2.

This gap analysis forms the heart of the new algorithm. The jobs are placed one at a time in the most strategic gap, and the CSBAD is updated.

The next two sections present two small procedures that support the main algorithm.

9.2.1 Algorithm for Sorting the Jobs According to Precedence

This section provides the algorithm that will be used to calculate the precedence level. A precedence level of 1 indicates that the job is not dependent on any other job. A precedence level of n indicates that the job is dependent on one or more jobs of lower levels, with at least one of these at level n-1. The precedence level will be stored in the vector l, with being the level for job j. The scheduling algorithm, Algorithm 9.1, will place jobs in accordance to their precedence level. This sorting will ensure that the precedence relations among the jobs are never violated and the resulting schedule is always feasible. 94

The information regarding the precedence relationships of the jobs is stored in the matrix A. A non-ε element, aij, corresponds to a precedence relationship between job i and job j. The element aij is equal to the minimum time required after the finish time of job j before job i can be processed. Therefore, any column j in A with all ε indicates that no jobs depend on j. Likewise, any row i in A with all ε indicates that job i does not depend on any other job.

Figure 9.6. Example problem with 8 jobs (jobs 1–3 at level 1, jobs 4–5 at level 2, jobs 6–7 at level 3, and job 8 at level 4)

The basic idea of this algorithm will be explained using the concepts of graph theory. For the given application, each node in the graph represents a job and each arc represents a precedence constraint between the two nodes of the arc (see Figure 9.6). It is assumed that the graph does not contain any loops, since this would indicate an infeasible schedule.

The algorithm is basically a breadth-first search, with each layer labeled with a level number. The detailed description of the algorithm follows. Each node (job) is assigned a value , which equals the number of incoming arcs from precedent nodes. The root nodes (those with = 0) are assigned a level = 1 and added to a queue Q, which is used to facilitate the breadth-first search. The next node n is removed from Q. The counter is decremented by one for all the nodes directly dependent on n. After this operation, if any dependent node i has = 0, i.e., node n was its precedent node of largest level, then its assigned level is . Afterwards, i is added to Q. The entire operation is repeated until Q is empty. Figure 9.6 shows 8 jobs, which are placed in order of increasing level.

The following is a description of all the steps of Algorithm 9.1. It is apparent that steps 1 and 2 terminate after going through all the jobs and all the elements in A, respectively. Step 3 enters each job into Q once and takes it out once. The number of jobs is finite; therefore, this step terminates when all the jobs have been taken off Q. The computational complexity of this algorithm is O(N²). Step 1 just initializes all of the elements of the vectors l and c; therefore, the complexity of step 1 is O(N). Step 2 iterates over all the elements of A and assigns values to the elements of the vectors l and k, so the complexity of step 2 is O(N²). On careful observation of step 3, it can be noted that every job enters Q only once; therefore, the outer while loop runs N times. Also, for each job, the inner loop finds the jobs dependent on it and updates their corresponding values in l and c. Therefore, the complexity of step 3 is O(N²). The complexity of step 4 is obviously O(N).

Algorithm 9.1. Determination of levels for all jobs
Input: A
Output: l

Step 1 (O(N)): initialize l and c; a queue Q = null
Step 2 (O(N²)):
    for (i = 1..N)
        for (j = 1..N)
            if ( )
                increment 
        if ( )
            add i to Q
Step 3 (O(N²)):
    while (Q not empty)
        set n equal to the next element in Q and remove it
        for (i = 1..N)
            if ( )
                decrement 
                if ( )
                    add i to Q
Step 4 (O(N)): if there exists any , then i is on a circuit, and a feasible solution does not exist
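The level computation above is a breadth-first, Kahn-style traversal of the precedence graph. A sketch under the matrix convention stated in the text (a non-ε entry aij means job i depends on job j); ε is represented by -inf and the function name is mine:

```python
import math
from collections import deque

EPS = -math.inf  # epsilon: no precedence relation

def precedence_levels(A):
    """Return the precedence level of each job, or None if A has a
    circuit (infeasible schedule).  Level-1 jobs depend on no one;
    a job's level is one more than the largest level among the jobs
    it depends on."""
    n = len(A)
    level = [0] * n
    # c[i] counts the predecessors of job i (non-eps entries of row i)
    c = [sum(1 for j in range(n) if A[i][j] > EPS) for i in range(n)]
    q = deque(i for i in range(n) if c[i] == 0)
    for i in q:
        level[i] = 1  # root nodes
    while q:
        nd = q.popleft()
        for i in range(n):
            if A[i][nd] > EPS:       # job i depends on job nd
                c[i] -= 1
                level[i] = max(level[i], level[nd] + 1)
                if c[i] == 0:        # all predecessors processed
                    q.append(i)
    if any(ci > 0 for ci in c):      # some job never released: circuit
        return None
    return level
```

Tracking the running maximum over predecessor levels makes the result independent of queue order; every job is enqueued at most once, giving the O(N²) bound argued above.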

9.2.2 Algorithms for Finding beq and ceq

The value represents the time that job j has to spend in the invariant before it enters the variant subsystem. However, this does not take into consideration the possibility of a dependency of j on other jobs in the variant that are processed before j. The dependency may delay the arrival of j; therefore, is no longer an accurate representation of the arrival time of job j to the variant. A similar issue exists with ; it may no longer accurately represent the remaining processing time. This section develops an algorithm to estimate the arrival times and the remaining processing times, including the effects of the constraints. The problem is that these times depend on the schedule, but they are needed in order to find the schedule. Therefore, the algorithm is designed to estimate a reasonable lower bound on these values. For any job j, is calculated by finding the maximum of and the minimum amount of time between when job k enters the system and when job j can start, where k iterates over all jobs that j depends on. For example, in Figure 9.7 the value of is dependent on job 1. Here the value of is calculated using (9-4). Also, , which depends on jobs 2 and 3, is calculated using (9-5). Note that (9-5) uses instead of for calculating the effect of job 2 on .

= max( , + + ) (9-4)

= max( , + + , + + ). (9-5)

Figure 9.7. Estimating the values of vector beq

is calculated in a similar manner. For example, in Figure 9.8 the value of is dependent on job 4. Here the value of is calculated using (9-6). Also, , which depends on jobs 2 and 3, is calculated using (9-7). Note that (9-7) uses instead of for calculating the effect of job 2 on .

= max ( , + + ). (9-6)

= max ( , + + , + + ). (9-7)

Figure 9.8. Estimating the values of vector ceq

The following are efficient algorithms that utilize (9-5) and (9-7) to calculate and for a job shop system. They are based on the same breadth-first search technique as Algorithm 9.1.


Algorithm 9.2. Determination of 
Input: The number of jobs, N, , and A
Output: 

Step 1 (O(N)): initialize; a queue Q = null
Step 2 (O(N²)):
    for (i = 1..N)
        for (j = 1..N)
            if ( )
                increment 
        if ( )
            add i to Q
Step 3 (O(N²)):
    while (Q not empty)
        set n equal to the next element in Q and remove it
        for (i = 1..N)
            if ( )
                decrement 
                 = max( , ain + + )
                if ( )
                    add i to Q
Step 4 (O(N)): if there exists any , then i is on a circuit, and a feasible solution does not exist

Step 1: Initializes all of the elements of the vectors l, , and k. Also, initializes a queue Q equal to null.

Step 2: Assigns the value of for a given job i equal to the number of non-ε elements in the ith row of A. This means the value of is equal to the number of jobs that have a precedence relationship with job i. All the jobs i with no precedence constraints, i.e., , are added to Q.

Step 3: For each iteration, a job n is taken out of Q, and all its dependent jobs i are identified. The counter, , for these jobs is decremented. For each i, is updated to be the maximum of its current value and the effect of the job n on it. That is, is set to max( , ain + + ). Finally, if , then has its final value and i is added to Q. This loop is repeated until Q is empty.

Step 4: Checks the feasibility of the schedule. If there is any job i such that , then this indicates the presence of a cycle in the graph, i.e., an infeasible schedule.
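The update in step 3 can be sketched with the same traversal used for the levels, folding the recirculation delays into the arrival-time bounds. This is a hedged sketch under my own naming: b holds the uncorrected arrival times, p the processing times in V, and a non-ε A[i][j] means job i must wait A[i][j] after job j finishes:

```python
import math
from collections import deque

EPS = -math.inf  # epsilon: no precedence relation

def effective_arrival_times(b, p, A):
    """Lower-bound estimate of each job's arrival time at the variant,
    in the spirit of Algorithm 9.2: the estimate for job i is the max
    of b[i] and A[i][n] + p[n] + (estimate for n), taken over every
    predecessor n of i.  Returns None if the graph has a circuit."""
    n = len(A)
    beq = list(b)
    c = [sum(1 for j in range(n) if A[i][j] > EPS) for i in range(n)]
    q = deque(i for i in range(n) if c[i] == 0)
    while q:
        nd = q.popleft()
        for i in range(n):
            if A[i][nd] > EPS:       # job i depends on job nd
                c[i] -= 1
                beq[i] = max(beq[i], A[i][nd] + p[nd] + beq[nd])
                if c[i] == 0:        # all predecessors folded in
                    q.append(i)
    if any(ci > 0 for ci in c):      # circuit: infeasible schedule
        return None
    return beq
```

Because a job is only dequeued after all its predecessors have contributed, each estimate uses the corrected values of its predecessors, mirroring the use of the corrected value in (9-5).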

The algorithm for calculating is similar to Algorithm 9.2, except that it starts at the leaf nodes.

Algorithm 9.3. Determination of

Input: The number of jobs, N, , and A.
Output: .

Step 1 (O(N)): Initialize the vectors. Set a queue Q = null.
Step 2 (O(N2)):
    for (i = 1..N)
        for (j = 1..N)
            if ( ) increment 
        if ( ) add i to Q
Step 3 (O(N2)):
    while (Q not empty)
        set n equal to the next element in Q and remove it
        for (i = 1..N)
            if ( )
                decrement 
                = max( , + + )
                if ( ) add i to Q
Step 4 (O(N)): If there exists any , then i is on a circuit, and a feasible solution does not exist.


Notice that Algorithm 9.2 and Algorithm 9.3 use the same governing breadth-first search logic as Algorithm 9.1 for determining levels. Therefore, the arguments that justify the computational complexity and termination of Algorithm 9.1 apply to these algorithms as well.

9.2.3 Tracking Gaps

The scheduling algorithm uses two vectors, s and f, to track gaps. Formally, si is the time gap i starts and fi is the time the gap finishes. The number of gaps is given by g.
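As a concrete illustration of this bookkeeping (list-based s and f vectors are assumed here, and the split rule is inferred from the gap updates in steps 17-18 of Algorithm 9.4 below):

```python
def place_in_gap(s, f, k, start, dur):
    """Hypothetical sketch of the gap bookkeeping: placing a job of
    length dur at time start inside gap k splits that gap into a left
    part [s[k], start] and a right part [start + dur, f[k]].
    Zero-length gaps are dropped, so g (the number of gaps) is simply
    the length of the returned vectors."""
    gaps = list(zip(s, f))
    left = (s[k], start)
    right = (start + dur, f[k])
    gaps[k:k + 1] = [g for g in (left, right) if g[1] > g[0]]
    return [g[0] for g in gaps], [g[1] for g in gaps]
```

For example, placing a 2-unit job at time 6 inside a gap running from 5 to 9 leaves gaps (5, 6) and (8, 9).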

9.2.4 Calculating the Feasible Range Properties

Two groups of sets are used by the algorithm to help track the constraints. The set contains the indices of all the jobs i that must be processed before job j, i.e. it contains all i such that ≠ ε. Likewise, the set contains the indices of all the jobs i that must be processed after the given job j.
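Assuming the convention that a non-ε entry of A in row j, column i encodes "i before j" (the exact index order was lost in extraction), both sets can be read directly off A:

```python
import math

EPS = -math.inf  # max-plus epsilon

def precedence_sets(A):
    """Sketch: build, for every job j, the set of jobs that must come
    before it and the set that must come after it, under the assumed
    convention A[j][i] != EPS <=> i must be processed before j."""
    N = len(A)
    before = [{i for i in range(N) if A[j][i] != EPS} for j in range(N)]
    after = [{i for i in range(N) if A[i][j] != EPS} for j in range(N)]
    return before, after
```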

The algorithm uses these two sets to calculate the effects of the constraints given in A on job j, as it is being placed. Let the vector  store the indices of the recommended schedule of jobs already placed before j. Formally, k is the index of the k-th job that is scheduled to be processed. The variable minbefore is the minimum amount of time after the first part enters the system and before job j can start, given all the constraints affecting it. It is given by

minbefore =  (  )

In a similar manner, the variable minafter is the minimum amount of time before the part will leave the system after job j is processed, given the constraints. It is given by

minafter =  (  )

The feasible range is then given by

𝑙
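Since the exact formulas for minbefore and minafter did not survive extraction, the following sketch is only a plausible reading: each is a maximum over the already-placed jobs in the relevant precedence set. The names placed_start, placed_end, p, and q are hypothetical.

```python
def min_before(j, before, placed_start, p):
    """Hypothetical sketch of minbefore: the latest completion offset,
    measured from the first part entering the system, among the
    already-placed predecessors of job j.  placed_start maps a placed
    job index to its scheduled start time; p holds processing times."""
    times = [placed_start[i] + p[i] for i in before[j] if i in placed_start]
    return max(times, default=0)

def min_after(j, after, placed_end, q):
    """Hypothetical sketch of minafter: the longest tail, measured back
    from the last part leaving the system, among the already-placed
    successors of job j."""
    times = [placed_end[i] + q[i] for i in after[j] if i in placed_end]
    return max(times, default=0)
```

With no placed predecessor or successor, both quantities default to 0, matching the feasible-region argument in section 9.2.6.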

9.2.5 Calculating the Length of the CSBAD after Addition of a Row

The introduction of section 9.2 illustrated that the length of the CSBAD after adding a job can vary depending on which gap it is placed in. The algorithm needs a method to estimate the change in the length. The total length of a CSBAD after addition of job j in gap k can be calculated using

𝑙 ( ([ ] )

( )). (9-8)

Equation (9-8) consists of maximizing two terms. The first term, , is the original length of the CSBAD. This is included to ensure that the length of the CSBAD does not decrease, since adding a job never shortens the CSBAD. The second term is divided into three components. The first component, ( ), gives the length of the CSBAD from the start of the first gap to the start of the processing time block of the new job j. It is the maximum of: the time it takes before the job is ready to be processed by the machine, ; the time between the start of the gap the job is placed in and the time that the first part enters the system, ; and the minimum time needed before job j can start given the constraints imposed by the previously scheduled jobs, . The second component is the processing time of the job, . Finally, the third component, ( ), is the length of time after the job is processed before it leaves the system. It is the maximum of: the remaining time the job needs to spend in the system, ; the time between the end of the gap the job is placed in and the time that the last part leaves the system, ; and the minimum time before job j can leave given the constraints imposed by the previously scheduled jobs, .
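With the components named as in the description above (r_j for the time before job j is ready, q_j for its remaining time in the system, s_k and f_k for the gap boundaries; all names are assumptions, since the symbols in (9-8) were lost), the structure of (9-8) can be written out as:

```python
def new_total_length(L_old, s_k, f_k, p_j, r_j, q_j, minbefore, minafter):
    """Sketch of (9-8): CSBAD length after placing job j in gap k.
    head = length from the start of the first gap to the start of j's
    processing block; tail = length from the end of that block until
    the part leaves the system.  The outer max keeps the length from
    decreasing, since adding a job never shortens the CSBAD."""
    head = max(r_j, s_k, minbefore)
    tail = max(q_j, L_old - f_k, minafter)
    return max(L_old, head + p_j + tail)
```

For example, placing a 2-unit job in a gap running from 3 to 6 of a length-10 CSBAD, with r_j = q_j = 1 and no constraint effects, gives max(10, 3 + 2 + 4) = 10: the length is unchanged.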

Figure 9.9. Total length case 1

Figure 9.9 shows a pictorial representation of an example of the addition of a new job j to an existing CSBAD, where the values of minbefore and minafter are equal to zero. Assuming , the calculation of the total length for the example shown in Figure 9.9 is

𝑙 ( ( ) ( ))

Figure 9.10 shows the pictorial representation of the addition of a new job to the existing CSBAD in the case where minbefore and minafter are non-zero. Also, notice that the minbefore and minafter values in this case are large enough to have an impact on the total length of the new CSBAD. Assuming , the calculation of the total length for the example shown in Figure 9.10 is

( ( ) ( ))

Figure 9.10. Total length case 2

9.2.6 Degrees of Freedom

Three degrees of freedom need to be resolved. First, the order in which the jobs are placed in the CSBAD makes a difference in the resulting schedule. Second, if more than one gap minimizes the new length of the CSBAD, which one is best? Finally, if the gap is larger than , where in the gap should the job be placed? Guidelines for resolving these have been determined experimentally.

In determining the proper order to place jobs, a primary concern is to always maintain a feasible region. This can be guaranteed if the jobs are placed in either ascending or descending order of levels. If the jobs are placed in ascending order of levels, there will always be a feasible region to the right of the last constraint, i.e., minafter is always 0. Likewise, if the jobs are placed in descending order of levels, there will always be a feasible region to the left of the last constraint, i.e., minbefore is always 0. It was found experimentally to be advantageous to first place as many jobs as possible that do not have dependencies among them. Therefore, the heuristic used in the algorithm to order the placement of jobs is: if the number of jobs at level one exceeds those at the maximum level, jobs should be placed in ascending order; otherwise, they should be placed in descending order. Jobs at the same level should be placed in order of descending job span, as defined by

(9-9)
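The ordering rule can be sketched as follows. The level and span vectors are assumed to be precomputed (the span definition (9-9) itself was lost in extraction), and the ascending/descending choice is passed in as a flag:

```python
def placement_order(level, span, ascending):
    """Sketch of the job-placement order: sort by level, ascending or
    descending per the level-one vs. maximum-level comparison, and
    within a level by descending job span (9-9)."""
    jobs = list(range(len(level)))
    key = (lambda j: (level[j], -span[j])) if ascending else \
          (lambda j: (-level[j], -span[j]))
    return sorted(jobs, key=key)
```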

If multiple gaps are found that the job can be placed in with the same effect on the length of the CSBAD, which gap should be chosen? After considerable experimentation, the following heuristic was found to give good results. If minafter = 0, the job should be shoved to the leftmost gap; if minbefore = 0, the job should be shoved to the rightmost gap. The reason for this is that the distances between the constraints and the job are minimized. If both minbefore and minafter are 0, the job should be shoved as far from the center as possible, since this will leave larger gaps near the center where they are more useful. Since jobs are placed in either ascending or descending order of levels, it is impossible for both minbefore and minafter to be nonzero.

The third degree of freedom arises once the optimal gap is found. If the gap is larger than , where in the gap should the job be placed? The heuristic used is to shove the job as far away from the center of the CSBAD as possible. This leaves larger gaps in the center for future jobs.
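These tie-breaking rules can be condensed into one selector. This is a sketch; measuring "farthest from the center" from gap midpoints is one reasonable reading, not necessarily the dissertation's exact rule:

```python
def choose_gap(K, s, f, L, minbefore, minafter):
    """Sketch of the gap tie-break: leftmost gap when minafter = 0,
    rightmost when minbefore = 0, and farthest from the CSBAD center
    when both are 0 (both nonzero cannot occur, as argued above)."""
    if minbefore == 0 and minafter == 0:
        center = L / 2
        return max(K, key=lambda k: abs((s[k] + f[k]) / 2 - center))
    if minafter == 0:
        return min(K, key=lambda k: s[k])   # leftmost gap
    return max(K, key=lambda k: s[k])       # rightmost gap
```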

9.2.7 Algorithm Description

This section presents the algorithm for generating a schedule that is near optimal.

It places jobs one at a time into a CSBAD using the approach presented in the introduction of section 9.2, except that [ ] and are used in order to estimate the effects of the dependencies in A on the schedule. It also incorporates the issues discussed in sections 9.2.1 through 9.2.6.

Algorithm 9.4. Scheduling algorithm for minimizing the makespan

Input: The system parameters , , , A, and the sets , .
Output: .

Step 1: Calculate and for all the jobs using Algorithm 9.2 and Algorithm 9.3.
Step 2: Calculate the number of empty rows, erow, and empty columns, ecol, in A.
Step 3: Put the jobs in the set J in the following order:
    a. if erow ≥ ecol, then in ascending order of level;
    b. if erow < ecol, then in descending order of level;
    c. place jobs at the same level in descending order of sum( ).
Step 4: Remove the top job in J. Let j be the index of this job. Put j in .
Step 5: Update s and f as follows: , , , and . These vectors record the start and finish times of the gaps.
Step 6: Let g be the number of gaps, that is, the size of f and s. Initialize g = 2.
Step 7: Remove the next job in J. Let j be the index of this job.
Step 8: Let be the largest index where  . If  then set = 0.
Step 9: Let be the smallest index such that  . If  then set = 0.
Step 10: Define the set FEASIBLE = { ≥ and }.
Step 11: Let minbefore =  (  ).
Step 12: Let minafter =  (  ).
Step 13: Let K be the set of all k in FEASIBLE that minimizes ( ( ) ( )).
Step 14: Select , the index of the gap that the operation will be placed in, according to the following guidelines:
    a. if minbefore = 0 and minafter = 0, ( ( ));
    b. if minafter = 0, ( );
    c. if minbefore = 0, ( ).
Step 15: If ( ), for down to , let   .
Step 16: Let  .
Step 17: If ( ) then
    a. let ( ( ) ( ) );
    b. for down to , let and ;
    c. let ( ), , and ( ).
Step 18: If ( ) then
    a. for down to , let and ;
    b. let ( ( ) ( ) );
    c. if ( ), for up to , let and ;
    d. let ( ), , and ( ).
Step 19: If ( ), exit.
Step 20: Let , and go to step 7.

Following is a description of the steps of Algorithm 9.4.

Step 1: It calculates the vectors and for all the jobs using Algorithm 9.2 and Algorithm 9.3.

Step 2: It calculates the number of empty rows and columns in A.

Step 3: It sorts the jobs in increasing or decreasing order of levels based upon the number of empty rows and columns. Jobs with equal levels are sorted in order of decreasing span. The sorted jobs are put in set J. The algorithm schedules the jobs one at a time in the order they appear in J.

Steps 4, 5 & 6: They use the vectors s and f to keep track of the start and finish time of each gap. Gap 1 is initialized to be the time before the first job enters the machine, and the second gap is initialized to the time between the part leaving the machine and when it exits the entire system.

Step 7: The start of the main loop. j is the current job being scheduled.

Steps 8, 9, 10, 11 & 12: They calculate the set of feasible gaps where this job can be placed, and they calculate the values of minbefore and minafter.

Step 13: It calculates the total length that results if job j is scheduled in each feasible gap k. The set K collects all the gaps that have minimal effect on the length.

Step 14: If there is more than one gap in K, the algorithm picks one depending on the values of minbefore and minafter. If minbefore and minafter are both zero, then the gap farthest from the center of the existing CSBAD is chosen. If minafter = 0, then the gap farthest from the end of the existing CSBAD is chosen. If minbefore = 0, then the gap farthest from the beginning of the existing CSBAD is chosen. The target gap is denoted by .

Steps 15 & 16: Add job j at the location in .

Steps 17 & 18: Update s and f. If the gap is larger than , then the row is shoved as far away from the center of the CSBAD as it can be without affecting the length.

Steps 19 & 20: The algorithm continues until all jobs are scheduled.

9.2.8 Computational Complexity of Algorithm 9.4

This section provides the computational complexity of the steps of Algorithm 9.4.

Step 1 can be performed in O(N log N), using . Steps 5 to 21 (the loop step) are 109 performed N times. Steps 6, 7, 8, 10 and 11 all have an inner loop that is performed at most N times. Therefore, the computational complexity of the algorithm is O(N2).

9.3 Example Problem

This section solves an example problem using Algorithm 9.4. It consists of 10 jobs. The variant subsystem comprises all the operations on one of the machines, and the rest form the invariant subsystem. The processing times on the variant subsystem and the relevant parameters of the invariant subsystem are described by (9-10), which can all be calculated using the methods in section 6.2.

,

[ ] [ ] [ ]

(9-10)

[ ]

The vectors and for this problem after the second step are

[ ] [ ]

The example problem is solved by first ordering the jobs in increasing order of level and, within the same level, in order of decreasing span. The set J becomes

J = {J3 J4 J7 J8 J1 J6 J10 J9 J2 J5}

After completing the first 4 steps of the algorithm, the CSBAD for the first job is constructed as shown in Figure 9.11.

Figure 9.11. CSBAD after the addition of the first job

Figure 9.12 shows the CSBAD after each job is added (gaps of zero length are not shown in Figure 9.12 due to space constraints).

Figure 9.12. CSBAD showing the optimal schedule for the example problem (the total length remains 76 at every stage)


As can be seen in Figure 9.12, the processing time blocks have been shaded for better visual distinction between gaps and processing time blocks. In stage 2, job 8 is scheduled in gap 2. On careful observation, it can be noticed that gap 2 is smaller than , but gap 2 was still chosen. This was done because placing it in any other gap would have increased the total length of the CSBAD by a larger amount. Stage 3 shows that is placed as far right as possible without increasing the total length. This is done to increase the distance between the center and , so that future jobs can be placed in the new gap 4. Stage 5 shows that gap 5 was used for placing without increasing the total length. So, the strategy in stage 3 that pushed to the right was successful.

Similarly, all other jobs are placed in the following stages without altering the total length. It is easy to see from the last CSBAD in Figure 9.12 that the final schedule is given by

 = {J1, J6, J2, J7, J5, J3, J10, J4, J8, J9}.

An exhaustive search was done for the given problem. The makespan, 76, calculated using the schedule given by this algorithm, is the optimal makespan for this problem.

9.4 Experimentation

This algorithm was applied to 6,000 randomly generated problems, divided into 3 sets. The variant has 4, 5 or 6 operations to be scheduled, and the size of matrix A was 4×4, 5×5 or 6×6, respectively. Each set consisted of 2,000 problems. The efficacy of the algorithm in finding optimal solutions for small problems can be seen in Figure 9.13, which shows that approximately 99.5% of the time the algorithm finds the optimal solution.

Figure 9.13. Graphical representation of 1st experimentation results (number of solutions vs. % deviation from optimal)

This algorithm was also applied to another set of 10,000 randomly generated problems. In this case the variant has 10 operations to be scheduled, and the size of matrix A was 10×10. The efficacy of the algorithm can be seen in Figure 9.14, which shows that approximately 95% of the time the algorithm finds a solution within 5% deviation of the optimal solution.

This algorithm was also applied to a random set of problems where the variant consisted of 100, 500 and 1000 operations. It was observed that the solutions to these problems were obtained within 4 seconds. This speed is consistent with the O(N2) complexity of the algorithm.

Figure 9.14. Graphical representation of 2nd experimentation results (number of solutions vs. % deviation from optimal)


CHAPTER 10: SUMMARY

This chapter summarizes the findings of this work along with a brief review of the entire research. It also identifies areas of future research that can extend this work.

10.1 Summary of Key Points in this Research

This research focused on the efficient modeling and analysis of job shop systems using max-plus algebraic techniques. The block diagram approach given by Imaev [2] was used to model the systems. The following summarizes the key contributions of this research:

1. Efficient modeling of operations on a single machine is described in chapter 4. The computational complexity of calculating the system matrix, the matrix that relates job leaving times to job arrival times, for a single machine is O(N2).

2. Chapter 5 describes a technique to combine two subsystems. An efficient algorithm based on the system equation, the matrix that relates job leaving times to job arrival times, of the composition of two subsystems is described. This algorithm can be used constructively to model the entire job shop. The computational complexity of the algorithm is O(N3).

3. Chapter 6 describes the novel bi-part system modeling approach. The system is divided into two parts, variant and invariant. Only the variant goes through changes during perturbation of the system, while the invariant remains unchanged. The algorithm presented in chapter 5 is used to calculate the parameters of the invariant subsystem. Chapter 6 also analyses and derives the matrix that relates job leaving times to job arrival times.

4. Chapter 7 specializes the model presented in chapter 6: the variant consists of all the operations on a single machine and the invariant consists of all the remaining operations. Further, it presents an efficient algorithm to find the makespan of a system when there is a perturbation in the variant subsystem and no recirculation of jobs in the variant. The computational complexity of this algorithm is O(N2), which is much better than the complexity of calculating the makespan using Fredman's algorithm (O(MN log MN)).

5. Chapter 8 presents a model that allows recirculation of jobs in the variant. Additionally, it shows a criterion to determine the feasibility of the schedule when there is a perturbation in the variant. It provides an algorithm to calculate the Kleene star of a lower triangular matrix with a computational complexity of , which is of the computational complexity of the traditional calculation of the star using (3-1). It also presents an efficient algorithm to calculate the makespan of this system with a computational complexity of , where O is the number of operations on the machine.

6. Chapter 9 derives a scheduling algorithm for a variant consisting of a single machine. The computational complexity of this algorithm is O(N2). Experimentation was performed on 10,000 randomly generated problems consisting of 10 jobs. On approximately 95% of the instances, the algorithm provided a solution within 5% of the optimal solution.
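The Kleene-star result in contribution 5 can be illustrated with a generic max-plus routine (not the dissertation's optimized algorithm; all names here are illustrative). For a strictly lower triangular matrix the series E ⊕ A ⊕ A² ⊕ ... terminates after N − 1 terms, so a forward substitution suffices:

```python
import math

EPS = -math.inf  # max-plus epsilon

def kleene_star_lower(A):
    """Kleene star of a strictly lower triangular N x N max-plus matrix:
    S[i][j] is the heaviest path weight from j to i in the acyclic
    precedence graph, with 0 (the max-plus unit) on the diagonal."""
    N = len(A)
    S = [[0 if i == j else EPS for j in range(N)] for i in range(N)]
    for j in range(N):
        for i in range(j + 1, N):          # strictly lower part only
            best = A[i][j]
            for k in range(j + 1, i):      # S[k][j] already final (k < i)
                if A[i][k] != EPS and S[k][j] != EPS:
                    best = max(best, A[i][k] + S[k][j])
            S[i][j] = best
    return S
```

This generic routine is O(N3); the point of chapter 8's algorithm is that the triangular structure admits something faster.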

10.2 Future Work

Initial investigation has shown that combining graph theory with max-plus algebra may improve the efficiency of the algorithms presented in chapters 7 and 8 even further. In addition, it is believed that the subsystem composition work of chapter 5 may also be improved using these techniques. The constructive nature and efficacy of the scheduling algorithm presented in chapter 9 may be further exploited by applying it to a Flexible Manufacturing System. Certain variants of conventional scheduling systems, such as preemptive scheduling and probabilistic processing times, may also be modeled using the existing approaches, and the scheduling algorithm may be extended to incorporate these changes. Future work would also include integrating the algorithm presented in chapter 9 into system optimization algorithms, such as the shifting bottleneck method.


REFERENCES

[1]. K.R. Baker, Sequencing and Scheduling, John Wiley and Sons, Inc., New York (USA), 1974, ISBN: 0-471-4555-1.
[2]. A. Imaev, Hierarchical Modeling of Manufacturing Systems Using Max-Plus Algebra, Ph.D. Dissertation, Ohio University, 2009.
[3]. M. Dell'Amico and M. Trubian, "Applying tabu search to the job-shop scheduling problem", Annals of Operations Research, Vol. 41, pp. 231-252, 1993.
[4]. T. Yamada et al., "A simulated annealing approach to job-shop scheduling using critical block transition operators", IEEE ICNN'94 International Conference on Neural Networks, Vol. 6, pp. 4688-4692, 1994.
[5]. R. Nakano and T. Yamada, "Conventional genetic algorithms for job shop problems", Proceedings of the 4th International Conference on Genetic Algorithms, pp. 474-479, 1991.
[6]. M. Dorigo and T. Stützle, Ant Colony Optimization, MIT Press, 2004, ISBN: 0-262-04219-3.
[7]. B. Heidergott et al., Max Plus at Work: Modeling and Analysis of Synchronized Systems: A Course on Max-Plus Algebra and Its Applications, Princeton, NJ: Princeton Univ. Press, 2007.
[8]. G. Cohen, P. Moller, J. Quadrat and M. Viot, "Algebraic tools for the performance evaluation of discrete event systems", Proceedings of the IEEE: Special Issue on Discrete Event Systems, Vol. 77(1), Jan. 1989.
[9]. P.R. Patlola, Efficient Evaluation of Makespan for a Manufacturing System Using Max-Plus Algebra, MS Thesis, Ohio University, 2011.
[10]. F. Glover, J.P. Kelly and M. Laguna, "Genetic algorithms and tabu search: hybrids for optimization", Computers and Operations Research, Vol. 22, No. 1, pp. 111-134, 1995.
[11]. F.S. Hillier and G.J. Lieberman, Introduction to Operations Research, 8th Ed., New York, NY: McGraw-Hill, 2005.

[12]. C.R. Reeves, Modern Heuristic Techniques for Combinatorial Problems, John Wiley & Sons, Inc., 1993.
[13]. D.T. Pham and D. Karaboga, Intelligent Optimisation Techniques: Genetic Algorithms, Tabu Search, Simulated Annealing and Neural Networks, London: Springer-Verlag, 2000.
[14]. S. French, Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop, Chichester, England: Ellis Horwood Ltd., 1982.
[15]. A. Jones and L.C. Rabelo (2000, Jan 19), Survey of Job Shop Scheduling Techniques [Online]. Available: http://www.mel.nist.gov/msidlibrary/doc/jobshop1.pdf
[16]. B. Giffler and G.L. Thompson, "Algorithms for solving production-scheduling problems", Operations Research, Vol. 8, pp. 488-503, 1960.
[17]. B.J. Lageweg et al., "Job-shop scheduling by implicit enumeration", Management Science, Vol. 24, pp. 441-450, 1978.
[18]. J.R. Baker and G.B. McMahon, "Scheduling the general job-shop", Management Science, Vol. 31, pp. 594-598, 1985.
[19]. M. Pinedo, Scheduling: Theory, Algorithms, and Systems, Englewood Cliffs, NJ: Prentice Hall, 1995.
[20]. A.S. Jain and S. Meeran, "Deterministic job-shop scheduling: past, present and future", European Journal of Operational Research, Vol. 113, pp. 390-434, 1999.
[21]. A. Kusiak, Computational Intelligence in Design and Manufacturing, New York: John Wiley & Sons, 2000.
[22]. S.S. Panwalkar and W. Iskander, "A survey of scheduling rules", Operations Research, Vol. 25, pp. 45-61, 1978.
[23]. I. Sabuncuoglu and M. Bayiz, "Job shop scheduling with beam search", European Journal of Operational Research, Vol. 118, pp. 390-412, 1999.
[24]. J. Adams et al., "The shifting bottleneck procedure for job shop scheduling", Management Science, Vol. 34, pp. 391-401, 1988.

[25]. R.J.M. Vaessens et al., "Job shop scheduling by local search", INFORMS Journal on Computing, Vol. 8, pp. 302-317, 1997.
[26]. S. Voss, I.H. Osman and C. Roucairol, Meta-Heuristics: Advances and Trends in Local Search Paradigms for Optimization, Norwell, MA: Kluwer Academic Publishers, 1999.
[27]. J. Bean, "Genetic algorithms and random keys for sequencing and optimization", ORSA Journal on Computing, pp. 154-160, 1994.
[28]. C. Bierwirth, "A generalized permutation approach to job-shop scheduling with genetic algorithms", OR Spektrum, pp. 88-92, 1995.
[29]. G. Shi, "A genetic algorithm applied to a classic job-shop scheduling problem", International Journal of Systems Science, pp. 25-32, 1998.
[30]. T. Yamada and R. Nakano, "A genetic algorithm with multi-step crossover for job-shop scheduling problems", GALESIA'95 Proceedings of the Int. Conf. on GAs in Eng. Sys., pp. 147-151, 1995.
[31]. D.C. Mattfeld, Evolutionary Search and the Job Shop: Investigations on Genetic Algorithms for Production Scheduling, Heidelberg, Germany: Physica-Verlag, 1997.
[32]. E.H.L. Aarts, "A computational study of local search algorithms for job shop scheduling", ORSA Journal on Computing, Vol. 6, pp. 118-125, 1994.
[33]. P.J.M. Van Laarhooven et al., "Job shop scheduling by simulated annealing", Operations Research, pp. 113-125, 1992.
[34]. T. Yamada et al., "A simulated annealing approach to job-shop scheduling using critical block transition operators", IEEE ICNN'94 International Conference on Neural Networks, Vol. 6, pp. 4688-4692, 1994.
[35]. T. Yamada et al., "Job-shop scheduling by simulated annealing combined with deterministic local search", MIC'95 Meta-heuristics International Conference, pp. 344-349, 1995.
[36]. H. Matsuo et al., "A controlled search simulated annealing method for the general job-shop scheduling problem", Working Paper #03-04-88, Graduate School of Business, The University of Texas at Austin, 1988.

[37]. M. Kolonko, "Some new results on simulated annealing applied to the job shop scheduling problem", European Journal of Operational Research, pp. 123-136, 1999.
[38]. A. Nambiar, Mathematical Formulation and Scheduling Heuristics for Cyclic Permutation Flow-Shops, Ph.D. Dissertation, Ohio University, 2008.
[39]. G. Cohen, S. Gaubert and J. Quadrat, "Max-plus algebra and system theory: where we are and where to go now", Annual Reviews in Control, Vol. 23, pp. 207-219, 1999.
[40]. T.-E. Lee, "Stable earliest starting schedules for cyclic job shops: a linear system approach", International Journal of Flexible Manufacturing Systems, 12(1):59-80, 2000.
[41]. F. Baccelli, G. Cohen, G.J. Olsder and J.P. Quadrat, Synchronization and Linearity, West Sussex, England: John Wiley and Sons, 1992.
[42]. B. Heidergott, "A characterisation of (max,+)-linear queueing systems", Queueing Systems, 35(1-4):237-262, 2000.
[43]. N. Krivulin, "A max-algebra approach to modeling and simulation of tandem queueing systems", Mathematical and Computer Modelling, 22:25-37, 1995.
[44]. N. Krivulin, "The max-plus algebra approach in modelling of queueing networks", Summer Computer Simulation Conference, pp. 485-490, Portland, OR, July 1997.
[45]. A. Doustmohammadi, Modeling and Analysis of Production Systems, Ph.D. Thesis, Georgia Institute of Technology, December 1995.
[46]. A. Doustmohammadi and E.W. Kamen, "Direct generation of event-timing equations for generalized flow shop systems", Proc. SPIE on Modeling, Simulation, and Control Technologies for Manufacturing, Vol. 2596, pp. 50-62, November 1995.
[47]. H. Goto and H. Takahashi, "Efficient representation of the state equation in max-plus linear systems with interval constrained parameters", IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E95-A, No. 2, pp. 608-612, February 2012.

[48]. T. Brunsch, J. Raisch and L. Hardouin, "Modeling and control of high-throughput screening systems", Control Engineering Practice, Vol. 20, No. 1, pp. 14-23, January 2012.
[49]. H. Goto, "Dual representation and its online scheduling method for event-varying DESs with capacity constraints", International Journal of Control, Vol. 81, Iss. 4, 2008.
[50]. H. Goto, "Online scheduling for event-varying MPL systems with finite buffer size", International Journal of , Vol. 2, No. 2, pp. 168-183, 2008.
[51]. H. Goto and S. Masuda, "Dynamic backward scheduling method for max-plus linear systems with a repetitive, MIMO, FIFO structure", WSEAS Transactions on Information Science and Applications, Vol. 5, Iss. 2, pp. 182-191, February 2008.
[52]. H. Goto and S. Masuda, "Derivation algorithm of state-space equation for production systems based on max-plus algebra", IEMS, Vol. 3, No. 1, pp. 1-11, April 2004.
[53]. H. Goto and S. Yoshida, "Fast computation methods for the Kleene star in max-plus linear systems with a DAG structure", IEICE Transactions on Fundamentals, Vol. E92-A, No. 11, pp. 2794-2799, 2009.
[54]. H. Goto and T. Ichige, "High-speed computation of the Kleene star in max-plus algebra using a cell broadband engine", ACE'10 Proceedings of the 9th WSEAS International Conference on Applications of Computer Engineering, pp. 69-74, 2010, ISBN: 978-960-474-167-3.
[55]. S. Yoshida, H. Takahashi and H. Goto, "Resolution of resource conflict in a max-plus linear representation: case of a single project", IEEE International Conference on Industrial Engineering and Engineering Management, pp. 1715-1719, 2011.
[56]. H. Goto, "Robust MPL scheduling considering the number of in-process jobs", Engineering Applications of Artificial Intelligence, Vol. 22, Iss. 4-5, pp. 603-607, June 2009.

[57]. H. Goto and S. Masuda, "Consideration of capacity and order constraints for event-varying MPL systems", IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, Vol. E90-A, Iss. 9, pp. 2024-2028, September 2008.
[58]. N.G. Hall, T.E. Lee and M.E. Posner, "The complexity of cyclic shop scheduling problems", Journal of Scheduling, 5(4):307-327, 2002.
[59]. C. Hanen, "Study of a NP-hard cyclic scheduling problem: the recurrent job-shop", European Journal of Operational Research, 72:82-101, 1994.
[60]. F. Chauvet, J.W. Herrmann and J.-M. Proth, "Optimization of cyclic production systems: a heuristic approach", IEEE Transactions on Robotics and Automation, 19(1):150-154, 2003.
[61]. N. Sauer, "Optimization of cyclic manufacturing systems with stochastic manufacturing times using event graphs", International Journal of Production Economics, 46:387-399, 1997.
[62]. P. Serafini and W. Ukovich, "A mathematical model for periodic scheduling problems", SIAM Journal of Discrete Mathematics, 2(4):550-581, 1989.
[63]. S.T. McCormick, M.L. Pinedo, S. Shenker and B. Wolf, "Sequencing in an assembly line with blocking to minimize cycle time", Operations Research, 37:925-935, 1989.
[64]. H. Matsuo, "Cyclic sequencing problems in the two-machine permutation flow shop: complexity, worst-case, and average-case analysis", Naval Research Logistics, 37:679-690, 1990.
[65]. J.S. Song and T.E. Lee, "Steady state analysis of cyclic shops with blocking", Technical Report IE97-23, Department of Industrial Engineering, KAIST, Korea, 1997.
[66]. J.S. Song and T.E. Lee, "Petri net modeling and scheduling for cyclic job shops with blocking", Computers and Industrial Engineering, 34(2):281-295, 1998.
[67]. T. Hsu, O. Korbaa, R. Dupas and G. Goncalves, "Genetic algorithm for f.m.s cyclic scheduling", 4th Conférence Francophone de MOdélisation et SIMulation,

Organisation et Conduite d'Activités dans l'Industrie et les Services, MOSIM'03, 23-25 April, Toulouse, France, 2003.
[68]. Y. Yang, M. Geilen, T. Basten, S. Stuijk and H. Corporaal, "Iteration-based trade-off analysis of resource-aware SDF", Proceedings of the 14th Euromicro Conference on Digital System Design: Architectures, Methods and Tools (DSD 2011), pp. 568-574, 2011.
[69]. T. Petrovic and S. Bogdan, "Matrix-based sequencing in multiple re-entrant flowlines", Transactions of the Institute of Measurement and Control, Vol. 33, No. 3-4, pp. 359-385, May-June 2011.
[70]. T. Brunsch and J. Raisch, "Modeling and control of high-throughput screening systems in a max-plus algebraic setting", Engineering Applications of Artificial Intelligence, 2010.
[71]. R.M.P. Goverde, "A delay propagation algorithm for large-scale railway traffic networks", Transportation Research Part C: Emerging Technologies, Vol. 18, No. 3, pp. 269-287, June 2010.
[72]. H. Kajiwara, Y. Hitoi and Y. Nakao, "On scheduling a shipbuilding line based on max-plus system dynamic representation", ICCAS-SICE 2009 - ICROS-SICE International Joint Conference 2009, Proceedings, pp. 1738-1741, 2009.
[73]. H. Kajiwara, Y. Hitoi and Y. Nakao, "Max-plus algebra based scheduling of a ship building line", RINA, Royal Institution of Naval Architects - International Conference on Computer Applications in Shipbuilding (ICCAS), Vol. 1, pp. 461-499, 2009.
[74]. M. Zhou and K. Venkatesh, Modeling, Simulation, and Control of Flexible Manufacturing Systems: A Petri Net Approach, World Scientific, 1999.
[75]. H. Goto and S. Yoshida, "A fast computation of the state vector in a class of DES system", ACE'10 Proceedings of the 9th WSEAS International Conference on Applications of Computer Engineering, pp. 75-80, 2010, ISBN: 978-960-474-167-3.

[76]. Ling-Huey Su and Cheng-Te Lin. Three-machine flowshop with two operations per job to minimize makespan. Computers & Industrial Engineering, 50:286–295, 2007.
[77]. J. Behnamian, S. M. T. Fatemi Ghomi, F. Jolai and O. Amirtaheri. Minimizing makespan on a three-machine flowshop batch scheduling problem with transportation using genetic algorithm. Applied Soft Computing, 12:768–777, 2012.
[78]. Richard J. Giglio and Harvey M. Wagner. Approximate solutions to the three-machine scheduling problem. Operations Research, 12(2):305–324, 1964.
[79]. Ali Allahverdi and Fawaz S. Al-Anzi. A branch-and-bound algorithm for three-machine flowshop scheduling problem to minimize total completion time with separate setup times. European Journal of Operational Research, 169:767–780, 2007.
[80]. Ling Wang, Lin-Yan Sun, Lin-Hui Sun and Ji-Bo Wang. On three-machine flow shop scheduling with deteriorating jobs. International Journal of Production Economics, 125:185–189, 2010.
[81]. Bo Chen, Celia A. Glass, Chris N. Potts and Vitaly A. Strusevich. A New Heuristic for Three-Machine Flow Shop Scheduling. Operations Research, 44(6):891–898, 1997.
[82]. Tai-Yue Wang, Yih-Hwang Yang and Hern-Jiang Lin. Comparison of scheduling efficiency in two/three-machine no-wait flow shop problem using simulated annealing and genetic algorithm. Asia-Pacific Journal of Operational Research, 23(1):41–59, 2007.
[83]. V. A. Strusevich, I. G. Drobouchevitch and N. V. Shakhlevich. Three-machine shop scheduling with partially ordered processing routes. Journal of the Operational Research Society, 53:574–582, 2002.
[84]. Ling-Huey Su and James C. Chen. Two- and three-machine flowshop scheduling problems with optional final operation. Journal of the Chinese Institute of Industrial Engineers, 28(1):55–71, 2011.
[85]. M. Gavalec and J. Plávka. Computing an eigenvector of a Monge matrix in max-plus algebra. Mathematical Methods of Operations Research, 63(3):543–551, 2007.

[86]. R. M. Karp. A characterization of the minimum cycle mean in a digraph. Discrete Mathematics, 23:309–311, 1978.
[87]. R. A. Cuninghame-Green. Minimax Algebra. Lecture Notes in Economics and Mathematical Systems 167. Springer, New York, 1979.
[88]. R. A. Cuninghame-Green. Minimax algebra and applications. Advances in Imaging and Electron Physics, 90, 1995.
[89]. M. Pinedo, X. Chao, J. Leung, A. Feldman, N. Asadathorn, S. Kreipl, M. Singer, A. Vazacopoulos and Y. Yang. LEKIN Software, 2002. Available at http://community.stern.nyu.edu/om/software/lekin/
[90]. R. Qing-dao-er-ji and Y. Wang. A new hybrid genetic algorithm for job shop scheduling problem. Computers and Operations Research, 39:2291–2299, 2012.
[91]. J. Yang and L. Sun. Clonal selection based memetic algorithm for job shop scheduling problems. Journal of Bionic Engineering, 5:111–119, 2008.
[92]. J. F. Goncalves, J. J. D. M. Mendes and M. G. C. Resende. A hybrid genetic algorithm for the job shop scheduling problem. European Journal of Operational Research, 167:77–95, 2005.
[93]. B. M. Ombuki and M. Ventresca. Local search genetic algorithms for the job shop scheduling problem. Applied Intelligence, 21:99–109, 2004.
[94]. C. Coello, D. Rivera and N. Cortez. Use of an artificial immune system for job shop scheduling. In Artificial Immune Systems: Proceedings of the ICARIS, pp. 1–10, 2003.
[95]. S. Binato, W. J. Hery, D. M. Loewenstern and M. G. C. Resende. A GRASP for job shop scheduling. In Essays and Surveys in Metaheuristics, pp. 59–79, 2002.
[96]. I. Sabuncuoglu and M. Bayiz. Job shop scheduling with beam search. European Journal of Operational Research, 118(2):390–412, 1999.
[97]. L. Wang and D. Zheng. An effective hybrid optimisation strategy for job-shop scheduling problems. Computers & Operations Research, 28(6):585–596, 2001.
[98]. T. H. Cormen, C. E. Leiserson, R. L. Rivest and C. Stein. "Section 24.3: Dijkstra's algorithm". Introduction to Algorithms (Second ed.). MIT Press and McGraw–Hill, 2001.

[99]. E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959.
[100]. M. L. Fredman and R. E. Tarjan. Fibonacci heaps and their uses in improved network optimization algorithms. In 25th Annual Symposium on Foundations of Computer Science, IEEE, pp. 338–346, 1984.
[101]. M. Singh and R. P. Judd. Efficient Performance Evaluation of a Job Shop Using Max-Plus Algebra. In Proceedings of the 2012 Industrial and Systems Engineering Research Conference, G. Lim and J. W. Herrmann, eds., Reno, Nevada, USA, 2012.
[102]. M. Singh and R. P. Judd. Efficient Modeling of a Job Shop with Recirculation of Jobs. In Proceedings of the 2013 Industrial and Systems Engineering Research Conference, A. Krishnamurthy and W. K. V. Chan, eds., Puerto Rico, USA, 2013.
[103]. M. Singh and R. P. Judd. Heuristic Algorithm for Minimizing Makespan of Job Shops. In Proceedings of the 2013 Industrial and Systems Engineering Research Conference, A. Krishnamurthy and W. K. V. Chan, eds., Puerto Rico, USA, 2013.
[104]. M. Mitchell. An Introduction to Genetic Algorithms (Complex Adaptive Systems). First MIT paperback edition, ISBN-10: 0262631857, 1998.

