PATTERN-RECOGNITION

A Thesis presented to

The Faculty of the

Fritz J. and Dolores H. Russ College of Engineering and Technology

Ohio University

In Partial Fulfillment

of the Requirements for the Degree

Master of Science

by

Xiaoqiang Yao

November, 1996

Table of Contents

Abstract

Chapter 1 Introduction

1.1 Overview

1.2 Research Objective and Development

1.3 Thesis Organization

Chapter 2 Literature Review

2.1 Mathematical Programming and Analytical Models

2.2 Heuristics, Dispatching Rules and Digital Computer Simulation

2.3 Artificial Intelligence Based Methodologies

2.3.1 Knowledge-based Expert Systems

2.3.2 Fuzzy Logic

2.3.3 Machine Learning

2.3.4 Genetic Algorithms

2.3.5 Artificial Neural Networks

2.4 Pattern Recognition and Neural Networks

Chapter 3 System Architecture

3.1 System Overview

3.1.1 Assessment Module

3.1.2 Data Preprocessing Module

3.1.3 Pattern Recognition/Decision Making/Optimization Module

3.2 Dispatching Rules and Performance Measures

3.2.1 Definition of Symbols and Terms

3.2.2 Definition of Selected Dispatching Rules

3.2.3 Definition of Selected Performance Measures

3.3 Backpropagation Paradigm

3.4 Genetic Algorithms as Optimizers

3.5 Summary

Chapter 4 Case Study 1: Single-machine Scheduling Problem

4.1 Description of the Problem

4.2 Data Acquisition and Analysis

4.2.1 Job Data

4.2.2 Performance Measures

4.2.3 Scheduling Rules

4.3 Data Preparation for Pattern Recognition Neural Networks

4.4 Implementation of the Expert Neural Network Rule Selector

4.5 Optimization with Genetic Algorithms

4.6 Analysis of the Results

Chapter 5 Case Study 2: Multiple-machine Scheduling Problem

5.1 Description of the Problem

5.2 Data Acquisition and Analysis

5.2.1 Job Data

5.2.2 Performance Measures

5.2.3 Scheduling Rules

5.3 Data Preprocessing for Neural Networks

5.4 Implementation of the Network Scheduler

5.5 Analysis of the Results

Chapter 6 Conclusions

6.1 Conclusions

6.2 Issues and Future Work

References

Appendix A Program Lists

Appendix B Sample Test Data for the Single-machine Problem

Appendix C Sample Test Data for the Multiple-machine Problem

List of Tables

Table 3-1 GA - population initialization 42

Table 3-2 GA - reproduction of superior strings 43

Table 3-3 GA - New population after one generation 44

Table 3-4 GA - reproduction of superior strings 44

Table 3-5 GA - results after two generations 45

Table 4-1 Process plans 49

Table 4-2 Setup time matrix 50

Table 4-3 A sample job batch 50

Table 4-4 Performance measure groups 52

Table 4-5 Neural network configuration and training specifications 56

Table 4-6 Simulation results of 15 scheduling rules for the example 57

Table 4-7 Results of 7 neural network rule selectors for the example 57

Table 4-8 Results of the genetic optimizer for the example 60

Table 4-9 Comparison of the overall performance among scheduling rules, neural networks, and genetic algorithms for 100 samples 61

Table 5-1 Process plans for multiple-machine system 66

Table 5-2 A sample job batch 69

Table 5-3 Neural network configuration 77

Table 5-4 Simulation results of 9 rules and neural network selection 77

Table 5-5 Performance of scheduling rules and neural networks 78

List of Figures

Figure 3-1 The general configuration of the intelligent scheduling system 25

Figure 3-2 Module for objective definition and data acquisition 26

Figure 3-3 Data preprocessing module 27

Figure 3-4 Module for scheduling pattern-recognition 29

Figure 3-5 A typical feedforward neural network 37

Figure 3-6 Crossover process 43

Figure 4-1 A single-machine system 47

Figure 4-2 Linear representation of (pt + st) in SPT order 54

Figure 4-3 Linear representation of (pt + st) in FIFO order 55

Figure 5-1 A 10-machine FMS system 64

Figure 5-2 Performance of rules for MFT 79

Figure 5-3 Performance of rules for MTD 79

Abstract

Interest in the use of artificial neural networks (ANNs) to solve engineering optimization problems has been growing at a substantial pace in recent years. This is mainly due to the ability of ANNs to mimic human intelligence, which makes them a robust technique for decision making in a dynamic environment. The emphasis of this study is to find an approach that is intelligent and flexible enough to handle the real-time scheduling requirements of a dynamic manufacturing environment, with a shorter response time. An artificial neural network based pattern-recognition approach for real-time scheduling of production systems is studied, and a scheduling system which integrates artificial neural networks, dispatching rules, real-time simulation, and genetic algorithms has been developed. In this system, artificial neural networks, with their ability to learn and generalize, are used to make a predictive selection of a small set of candidate scheduling policies from a larger set of heuristics dynamically at a decision point, without searching through the solution space exhaustively. Genetic algorithms are then applied, taking this selected set of rules as part of the "seed" rules, to generate a single final "best" schedule; this schedule may be totally different from any of the seed rules. The approach has been applied, with some variation, in two cases: (1) a single-machine scheduling problem with sequence-dependent setup times; (2) a multiple-machine scheduling problem. The simulation results in both cases, for different performance measures, demonstrated that the neural network based integrated system performed better than any dispatching rule alone. Artificial neural networks, when appropriately built, possess promising potential for solving real-time production scheduling problems at an intelligent level which traditional scheduling theories and techniques have not been able to provide.

Chapter 1 Introduction

1.1 Overview

In a manufacturing system, the construction of a good schedule is often the primary means of achieving major system performance goals such as reducing costs and increasing productivity. Unfortunately, scheduling of manufacturing systems is a problem of well known complexity. In recent years, flexible manufacturing systems (FMS) have played an increasingly important role in manufacturing environments. Characterized by flexibility and adaptability, FMSs have presented new challenges to the scheduling of manufacturing systems. They require that the scheduling system operate in real time with a short response time and be intelligent enough to handle dynamic requirement changes and reflect both the flexibility and adaptability of these manufacturing systems. Over the years, many scheduling theories and techniques have been studied and developed. Among the traditional techniques, dispatching rules are used the most in scheduling manufacturing systems, and the selection of the most suitable rule for a given situation is mainly carried out by computer simulation. However, the performance of different scheduling rules is highly dependent on the criteria selected as well as on the current state of the production system, and the selection of a suitable rule using computer simulation presents enormous practical difficulties to their real-time application because of the tremendous computational effort involved and the time needed, especially in large problems and when there are many rules to search (Conway 1965, Baker 1974, Graves 1981, Blackstone 1982, Shaw 1990, Doctor 1993, Matsuura 1993, etc.). Recently, developments in artificial neural networks have provided some innovative alternatives to attack the scheduling problems of dynamic manufacturing systems, with certain new potential which traditional techniques have been unable to provide (Wu 1985, Arizono 1992, Chryssolouris 1992, Yih and Jones 1992, Rabelo 1992, 1993, etc.).
The focus of this research is on the application of artificial neural networks and pattern-recognition techniques, as well as the implementation of the concept of an integrated system for the scheduling of dynamic manufacturing systems. Artificial neural networks are in essence a mapping of a set of inputs to a set of outputs based on certain mapping relationships encoded in their structure. They tend to capture, in a black box, the general relationships between inputs and outputs that are difficult or impossible to represent with any analytical model (Chryssolouris et al. 1992). In many studies of artificial neural networks and their application to various engineering optimization problems, artificial neural networks have shown several major advantages over traditional methods:

- The ability to learn from past experience and to generalize the knowledge learned, through closed-loop interaction with the system and its environment. This gives neural networks the ability to derive good results even when there is a certain level of noise in the input data.
- Faster execution speed than simulation.
- No need for an exhaustive search to reach a satisfactory result.
- No need to represent mathematically the general relationship between the inputs and outputs.

These unique features make artificial neural networks an appealing technique: they are intrinsically parallel and could in principle be used to explore solutions for large, complicated combinatorial engineering optimization problems. This has raised great interest in the potential application of these techniques to the scheduling of dynamic manufacturing systems by developing intelligent, robust, real-time schedulers.

Among the many neural network paradigms, backpropagation neural networks have been successfully applied in the fields of pattern recognition and classification, and have been considered for solving a variety of scheduling problems. They can be utilized to recognize the patterns of different scheduling situations and to choose a suitable scheduling policy (dispatching rule), based on the pattern recognized, from a larger set of available rules. Although the training of such networks can take a long time to finish, once a network is properly constructed and trained, it can provide robust performance and more globally optimized results.

1.2 Research Objective and Development

The objective of this research is to develop an effective and "intelligent" system using pattern-recognition concepts to make fast and (near) optimal scheduling decisions in a simulated real-time dynamic manufacturing environment. In this research, the application of a set of backpropagation neural networks to dynamic manufacturing scheduling problems is studied, and an artificial neural network based pattern-recognition scheduling system, which integrates simulation, dispatching rules, neural networks and/or genetic algorithms, has been developed. Two cases, a single-machine job-shop scheduling problem with job-sequence-dependent setup times and a multiple-machine scheduling problem, have been studied using the system. The results of this research show that the effective integration of artificial intelligence techniques, such as neural networks and genetic algorithms, with traditional scheduling approaches, such as dispatching rules, is able to provide the FMS with a scheduling system that has the required level of "intelligence" to respond to dynamic changes in the manufacturing environment in a timely and effective manner.

1.3 Thesis Organization

The rest of this thesis is composed of the following sections: Chapter 2 presents a brief review of the history and recent development of research on traditional and state-of-the-art theories and techniques applied in the field of manufacturing scheduling; Chapter 3 describes in general the structures of the neural network based pattern-recognition scheduling systems proposed in this research, as well as the techniques used in the development of such systems; Chapters 4 and 5 give detailed descriptions and analyses of the processes and results for the single-machine and multiple-machine scheduling problems, respectively; and Chapter 6 gives the conclusions and future prospects of this research. Test data and program lists are included in the appendices.

This research was carried out primarily on IBM PC compatible computer systems (mainly 486 @ 33 MHz), and all the computer programs used in this research were developed in C and/or Visual Basic.

Chapter 2 Literature Review

Scheduling is the allocation of resources over a specified time to perform a collection of tasks (Baker 1974). Job-shop scheduling is a classical, NP-complete combinatorial problem with an exponentially increasing solution space that is too large to search exhaustively. There exist (n!)^m possible solutions for a scheduling problem with n jobs queued up at each of m machines (for example, there can be up to 10! = 3,628,800 different ways to schedule 10 jobs queued up at one work station). Researchers have long realized that the development of effective and representative models, heuristics, or algorithms is critical to solving problems of this difficulty class. During the past three decades, a considerable number of approaches to solving job-shop scheduling problems have been reported in the literature. These include mathematical programming formulations and analytical models; heuristics, dispatching rules, and simulation; artificial intelligence (AI) based techniques, which include knowledge-based (expert) systems, fuzzy logic, machine learning, genetic algorithms, and artificial neural networks; as well as applications of pattern recognition techniques. A brief review of these methodologies and how they have been applied in solving production scheduling problems is presented in this chapter.

2.1 Mathematical Programming and Analytical Models

Mathematical programming has been applied extensively to job-shop scheduling problems (Baker 1974, French 1982). In fact, classical scheduling theory is concerned primarily with mathematical models that relate to the scheduling function; it is a quantitative approach that translates decision-making goals into an explicit objective function and decision-making restrictions into explicit constraints (Baker 1974). Scheduling problems have been formulated and solved using various operations research techniques such as integer programming, mixed integer programming, dynamic programming models, and branch and bound methods (Fry et al. 1987, Hutchinson et al. 1989, Raman et al. 1986, Selen and Hott 1986, Srikar and Gosh 1986, Van Vliet and Van Wassenhove 1989). Although many of the resulting scheduling plans turned out better than those worked out by hand, these approaches were found applicable only to a limited class of problems, owing to the unique characteristics of scheduling problems in the manufacturing environment, the difficulties in formulating explicit constraints for the mathematical models, which often resulted in a lack of effective consideration of some important aspects such as the dynamic nature of production scheduling, capacity planning, etc. (Buxey 1989, MacCarthy and Liu 1993), and the limitations in developing generic solution techniques. Moreover, the addition of complex constraints to the modeling of real-time scheduling problems dramatically increases the requirements for computational resources and makes these methods feasible only as a priori approaches. For these reasons, most of them are rarely used outside the classroom (Buxey 1989). A number of techniques within this domain have been proposed and applied to overcome the deficiencies of traditional mathematical programming methods.
Among them, Davis and Jones (1988, 1989) proposed a methodology, part of a closed-loop, real-time, two-level hierarchical shop floor control system, based on the decomposition of mathematical programming problems using both Benders-type (Benders 1989) and Dantzig/Wolfe-type (Dantzig and Wolfe 1960) decompositions; Gershwin (1989) developed a mathematical programming framework through a multi-layer hierarchical model for the analysis of production planning and scheduling; and Hutchinson et al. (1989) used a branch and bound scheme with relaxed constraints and obtained better performance results than a decomposition scheme, although the new method required 4.5 times more computational resources. Nevertheless, because of the high complexity of scheduling problems in a dynamic manufacturing environment and the lack of effective techniques necessary to solve the formulations of these problems, "heuristics or AI techniques seem to be unavoidable" (MacCarthy and Liu 1993) for solving dynamic job-shop scheduling problems.

2.2 Heuristics, Dispatching Rules and Digital Computer Simulation

Due to their complexity, dynamic job-shop and FMS scheduling problems have consistently been approached through the use of dispatching rules. A dispatching rule is the procedure or standard used to select the next job to be processed from a set of jobs awaiting service (Blackstone et al. 1982). Dispatching rules, which are simple in form yet have the ability to provide good solutions to complex problems in real time, can be easily applied to job-shop scheduling problems. The terms dispatching rule, scheduling rule, sequencing rule, and heuristic are often used synonymously. The behavior and performance of a large number of dispatching rules have been studied extensively using computer simulation techniques, because of the complexity involved in evaluating them analytically. Many studies (Conway 1965, Baker 1974, 1984, Hershauer and Ebert 1975, Panwalker and Iskander 1977, Graves 1981, Blackstone et al. 1982, Shaw et al. 1990, Montazeri and Van Wassenhove 1990, Doctor et al. 1993, Matsuura et al. 1993, etc.) have revealed that the relative performance of dispatching rules depends on the characteristics of the system, and no single dispatching strategy has been demonstrated to be consistently superior to the others under a variety of shop configurations and operating conditions at all times; that is, different rules have different effects on given performance measures. Most of these studies have focused on which rule to choose in order to obtain the best scheduling results for a particular performance criterion, and sometimes conflicting results can be found in the literature, depending on the configurations of the individual systems.

Dispatching rules can be divided into four classes (Blackstone et al. 1982): (1) rules involving processing times, (2) rules involving due dates, (3) simple rules involving neither processing times nor due dates, and (4) combination rules involving two or more of the previous classes.

Conway (1965) studied a large number of dispatching rules of different classes based on different performance criteria. Among the rules tested, processing-time based rules in general perform better than due-date based rules, and SPT dominates the other priority rules for flow time and work-in-process inventory based performance measures. With respect to the performance criteria of job lateness and number of tardy jobs, due-date based rules seem to have some advantage over processing-time based rules, especially when due dates are established as some multiple of total processing time. Hershauer and Ebert (1975) tested a dozen different rules, both simple and combinatorial, and concluded that the slack-per-operation rule performs better than other due-date based rules. They also found that combinatorial rules do not necessarily yield better results than single rules. However, Balakur and Steudel (1984) indicate in their research that combined rules involving slack time and shortest processing time are among the most promising ones and worth further research. Rochette and Sadowski (1976) tested eight different rules and reached the conclusion that SPT was the best performer in all situations except when optimizing mean tardiness with a flexible workforce, in which case the earliest due date (EDD) rule outperforms SPT. Baker (1984) finds that there are crossovers among dispatching rules depending on due-date tightness. In particular, EDD is better when due dates are at the loose end, and the SPT rule performs well at the tight end.

Doctor et al. (1993) developed a heuristic-based algorithm based on a non-delay principle to solve a machining and assembly job-shop scheduling problem. This research supports the conclusion that the slack time rule performs better than the SPT rule in an assembly job-shop environment. Matsuura et al. (1993) proposed an approach that switches from sequencing to dispatching according to the manufacturing situation, to make optimal use of both sequencing and dispatching methods. Montazeri and Van Wassenhove (1990) concluded, after comparing 14 dispatching rules using simulation in an FMS environment, that in general SPT is good at minimizing average waiting times and the longest processing time (LPT) rule maximizes machine utilization; however, "no single rule is the best on all performance measures and it is up to the user to choose one or more of the rules according to the performance measures prevailing in the particular application." General results are almost impossible to obtain because the performance of different scheduling rules depends heavily on the selected criteria as well as on the nature of the production system. However, "smart" selection among competing rules according to different performance criteria can greatly reduce the number of possible schedules, and a good schedule plan is likely to be obtained by selecting an appropriate rule for a certain situation at a certain point in time. The selection of the best dispatching rule for a given performance measure continues to be a very active area of research (Chandra and Talavage 1991).

2.3 Artificial Intelligence Based Methodologies

Flexible manufacturing systems (FMS) present a growing need for new approaches to handle the dynamic scheduling of such sophisticated manufacturing systems. Since the early 1980s, with the rapid advances in computer technologies, a series of popular new technologies have been applied to FMS scheduling systems, such as knowledge-based (expert) systems, fuzzy logic, machine learning, genetic algorithms, and artificial neural networks. All of these new technologies fall under the broad category of Artificial Intelligence (AI). The common feature of AI technologies is the acquisition of knowledge, and the effectiveness of an AI technique is its ability to generalize based on the knowledge it has acquired. Several unique capabilities of AI make this technology particularly suitable for the development of intelligent scheduling systems for FMSs. The main advantages of AI techniques include (1) the ability to provide structured, quantitative and qualitative knowledge, to fully incorporate this knowledge into the decision-making process, and to capture complex relationships in refined data structures; (2) the capability of generating heuristics which are significantly more complex than the simple dispatching rules; and (3) the ability of reasoning, which enables the scheduler to select the best heuristic based on the range of information on the entire system, including the current jobs, expected new jobs, status of machines and material transporters, and status of inventory and personnel, and thus make more adaptive and predictive decisions, which is critical to the successful development and implementation of an FMS scheduling system. There are also some drawbacks in AI-based systems.
For instance, (1) AI systems can be time-consuming to build, verify, and maintain; (2) since AI systems narrow the search space and generate only feasible (not necessarily the best) solutions, it is sometimes hard to tell how close a solution is to the optimal solution; and (3) AI systems are environment related: they are tied directly to the environment they were built to handle, which makes it difficult to build generic commercial scheduling systems based on AI techniques.

Nevertheless, as new AI techniques evolve, more and more research has demonstrated that artificial intelligence technologies show good promise in solving dynamic manufacturing scheduling problems.

2.3.1 Knowledge-based Expert Systems

Knowledge-based expert systems have been an important form of the use of AI techniques as a means of resolving job-shop scheduling problems. The central part of a knowledge-based system is a knowledge base, which is a collection and extraction of the information (rules) about the system, usually generated from simulation results or from the experience of experts. Whenever a decision-making point is reached, the system searches the database to find the condition that matches the pattern of the current situation and makes proper decisions following the rules, based on the knowledge acquired. Much research applying the knowledge-based approach to job-shop scheduling problems has been conducted. The ISIS system (Fox and Smith 1984) was among the first knowledge-based expert systems designed for job-shop scheduling problems. It uses a knowledge-based, constraint-directed heuristic search approach to construct schedules, using constraint satisfaction as an index to reduce the search space and direct which way the solution search should go. Sauve and Collinot (1987) used an object-oriented language to represent knowledge concerning constraints and flexibility factors in an FMS scheduling problem. The system includes two parts: an off-line daily scheduler and an on-line control of production disturbances. Kusiak (1989) described the KBSS, a knowledge-based scheduling system for an automated manufacturing environment. The system focuses on the integration of optimization and expert system approaches. Kusiak (1990) also proposed another knowledge-based expert planning system, which utilizes pattern matching to find problem patterns. Wu (1987) designed a multiple-pass knowledge-based expert cell control system integrated with simulation procedures. Evaluation of the dispatching rules at short time intervals, combined with continual alternation among different dispatching rules, makes the system more responsive and adaptive to the environment.
However, the system is still unable to learn from previous outcomes.

Chryssolouris et al. (1989) proposed a structured decision-making approach called MAnufacturing DEcision MAking (MADEMA). It is a multi-criteria decision-making scheme intended to control the degree of optimization incorporated into the decision-making process through flexible consideration of a variety of criteria and the decision horizon. It is concluded (Chryssolouris et al. 1989) that the accuracy of the estimated values of the criteria determines the performance of this system. Recently, induction has been utilized to facilitate the scheduling knowledge acquisition process, which is considered "the most time-consuming and difficult step in the development of an expert scheduling system" (Yih 1992). Shaw (1989) integrates pattern-directed inference and heuristic search into a knowledge-based system to develop an FMS scheduling system based on three AI techniques: (1) the pattern-directed inference technique, used to capture the dynamic features of the FMS environment; (2) the nonlinear planning technique, to construct schedules and coordinate resources; and (3) inductive learning, to generate the pattern-directed heuristics. The major advantage of this method is that it is able to achieve adaptive decision making. Knowledge-based expert systems have provided an environment where ill-structured specific knowledge and well-structured generic knowledge from scheduling theory can be combined towards the development of a good schedule. However, several technical limitations make these systems impractical for real-time implementation: the complex knowledge acquisition process makes it hard for the database to include all possible situations for a system; they are weak in learning; and their execution is slow and demands great computational effort, especially when a large system or complex rules are involved.

2.3.2 Fuzzy Logic

Fuzzy logic is based on fuzzy set theory, which was defined originally by Lotfi A. Zadeh (1965). A fuzzy set is an extension of a classical yes-or-no, or crisp, set. In contrast to crisp sets, which allow only full membership or non-membership (i.e., each element belongs to the set either 0% or 100%, with nothing in between), fuzzy sets allow partial membership (i.e., with values ranging from 0% to 100%) for their individual elements. In other words, an element may belong to a fuzzy set to some degree. Fuzzy systems are considered a type of knowledge-based expert system, since they too store knowledge as rules - fuzzy rules. However, they are aimed at the formalization of models of reasoning that are approximate rather than exact (Kosko 1992), which provides a more effective means of capturing the approximate, inexact nature of the real world than the conventional knowledge-based expert system.

Fuzzy set theory has been utilized, and has shown good practical promise, in many areas ranging from simple consumer products to sophisticated system controls. For example, fuzzy logic has been used in the control of the engine idling problem (Rockwell International Corp.), in the design of auto-focusing cameras (Canon Inc., Sanyo Fisher USA Corp.), and in various NASA projects. Fuzzy set theory can be useful in modeling and solving some job-shop scheduling problems when the assumption of precise data is not valid. One such example is scheduling problems with uncertain processing times (Tsujimura et al. 1993). These uncertain processing times can be represented by a fuzzy number, which is described using the concept of an interval of confidence. These approaches are usually integrated with other methodologies (e.g., search procedures). The fuzzy logic concept has been used to develop a hybrid approach to help set up families and balance machine loads on the printed circuit assembly line at Hewlett-Packard (Krucky 1994). Grabot and Geneste (1994) developed an approach that uses fuzzy logic to generate "parameterized" aggregate dispatching rules which can balance and compromise between the satisfaction of multiple criteria, and adjust the effects of the single-purpose rules they are composed of as the production state changes. Fuzzy logic is still a new field of technology in its early stage of development, and there is still a need for more research on its theory and applications.

2.3.3 Machine Learning

The ability to learn and reason is very important for an intelligent system. Machine learning is a rapidly growing research area studying methods for developing AI systems that are capable of learning (Shaw et al. 1992). Most machine learning methods are based on the concept of generalization, that is, deriving the general characteristics of a class by learning the behavior of individual elements of that class. Several research works (Shaw 1989, Park et al. 1989, Shaw and Winston 1989, Shaw et al. 1992) have shown that the effective incorporation of machine learning capabilities into intelligent scheduling systems has the potential to improve system performance significantly in several respects, such as speeding up the search process and reducing computational complexity by accumulating heuristics, generalizing the planning process, and enhancing rule-based inference by automating the acquisition and refinement of knowledge-based rules. More work still needs to be done to apply machine learning techniques to intelligent scheduling.

2.3.4 Genetic Algorithms

Invented by John Holland two decades ago (Holland 1975) and refined in the late 1980s and 1990s, genetic algorithms represent a class of adaptive and robust optimization search techniques that have been utilized over the past decade to approach optimal solutions for various dynamic engineering problems, including job-shop scheduling problems. Genetic algorithms are an imitation of the biological processes of natural selection and mutation, based on Darwin's theory of survival of the fittest. Operations similar to those found in natural genetics, such as reproduction, crossover, and mutation, are used in genetic algorithms to generate a better generation of solution candidates and thus narrow down the search space (Karr 1991), and an objective function is employed to guide the search direction towards optimal solutions. A genetic algorithm searches for a solution by generating a population of search points randomly, evaluating each point independently, and then combining and extracting good qualities from existing points to form a new generation of improved points (Goldberg 1989). Since genetic algorithms search from a population of possible answers simultaneously, rather than from a single point at a time, they can potentially provide a more global search than most traditional search methods, which can be an advantage in solving complex optimization problems such as dynamic job-shop scheduling problems. A simple yet typical genetic algorithm often performs population improvements in the following steps:

(1) Initialization of the population - the initial values of the first generation of the solution population can be randomly assigned. However, the closer the initial population is to the optimal solution, the better and more efficiently the algorithm performs.

(2) Evaluation of the fitness of each member of the entire population - each chromosome, or possible solution, is given a value which is the value of an objective function measuring how well that solution solves the problem, and the fitness value indicates the fittest solution in the current population.

(3) Application of selection pressure - every member of the entire population is ranked based on its fitness value, with the fittest chromosome at the top. Higher-ranking individuals are given a higher mating rate to ensure a higher possibility of producing better or equally good offspring.

(4) Reproduction and variation - this process produces the next generation of the population from the current population by conducting a series of random trials in which basic building blocks are exchanged a number of times, proportional to the values of the objective functions, from the parent chromosomes to offspring chromosomes. Two genetic operators that simulate the processes occurring during biological reproduction are involved in the reproduction phase. The first is crossover, which exchanges genetic material and passes characteristics from the parents to the offspring; the performance of this operator largely decides the quality of the population of the next generation. The other is mutation, which randomly changes the chromosomes at a low rate, with the intention of forcing chromosomes to explore more of the solution space and preventing evolutionary dead ends.

(5) Repeat steps 2-4 if necessary - until the objective function has reached an acceptable value. Usually, good results can be achieved within a few generations, and the convergence is even faster when at least one component of the initial population has a "high quality".

The effective representation of the problem to be solved plays a key role in the successful implementation of genetic algorithms. Genetic algorithms have been applied to job-shop scheduling problems using various representation schemes. Starkweather et al. (1992) implemented a genetic algorithm solution to a bi-criteria resource allocation problem in a real production facility using enhanced edge-recombination operators -- a type of indirect ("blind") symbolic crossover operator developed by Syswerda (1991). This type of operator emphasizes information about the relative order of the elements in the permutation. A single evaluation function that took into consideration the scale factors of average inventory and mean time of the orders was developed to evaluate the new population. It is concluded from this research that genetic algorithms can be used to develop effective schedulers with the ability to consider plant dynamics (Starkweather et al. 1992). Binary encoding representations are found more suitable for conventional genetic algorithm operators than symbolic representations. Syswerda (1991) used a binary coding for the Traveling Salesman Problem (TSP) in his study of various crossover operators and obtained promising results. Cleveland and Smith (1989) used several operators in a genetic algorithm with a binary coding representing the release time of each job in scheduling the releases of jobs into an automated manufacturing facility. After comparison with three dispatching rules (SPT, EDD and mSLACK), it was concluded that the genetic algorithm approach is more effective.
Nakano and Yamada (1991) used a binary-coded genetic algorithm with a "harmonization" procedure to handle illegal chromosomes by removing both local inconsistency within each machine and global inconsistency between different machines. The results are comparable to those of Branch & Bound procedures for certain job-shop scheduling problems. DeJong and Spears (1991) used a binary encoding methodology that mapped sequencing problems and their constraints to a Boolean Satisfiability Problem (SAT), and a "partial payoff scheme" which ranks partially satisfied expressions, in solving some simple problems. The approach and its integration with other genetic methodologies look very promising (Goldberg et al. 1991).

While symbolic and binary representations can simplify the problem to accommodate simpler genetic operators, direct representations can reveal the nature of the problem in a more thorough way. Knowledge-based and heuristic crossover operators which use direct representations have been used recently in job-shop scheduling. Bruns (1993) applied an expanded direct representation incorporating all available information in a complex scheduling problem and claimed that the result was better than that of a blind recombination operator. Bagchi et al. (1991) compared a blind recombination operator with a local search routine against a more direct representation that included process plan information and concluded that the more direct the representation, the more sophisticated the genetic operators required, but the refinement level required of the schedule builder is reduced. Fang et al. (1993) developed a genetic scheduler on a benchmark job-shop scheduling problem with adaptive mutation and crossover, evaluating parts of the chromosome with lower and higher variances. The result obtained was better than previous genetic methods. Scheduling heuristics can be used to select members of the initial population. When a good "seed" schedule, which belongs to the initial population, is found, genetic algorithms then become as good as (if not better than) the annealing-type algorithms (Fleury 1993). The integration with other search procedures has enhanced GAs' capabilities to overcome deceptive problems (Parunak 1991). It is more and more recognized that genetic algorithms have a close relationship with artificial neural networks, and each will complement the other if these techniques can be integrated effectively in one system.
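The five-step improvement loop described at the beginning of this section can be sketched as a minimal genetic algorithm. This is an illustration only, not the implementation used in this research: the chromosome is a plain bit string, and the fitness function (a toy "one-max" objective) and all parameter values are assumptions.

```python
import random

def genetic_algorithm(fitness, chrom_len=10, pop_size=20,
                      crossover_rate=0.9, mutation_rate=0.01,
                      generations=50, seed=0):
    """Minimal GA following the five steps described in the text."""
    rng = random.Random(seed)
    # (1) Initialization: random binary chromosomes.
    pop = [[rng.randint(0, 1) for _ in range(chrom_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # (2) Evaluation: score every member with the objective function.
        ranked = sorted(pop, key=fitness, reverse=True)
        # (3) Selection pressure: only the fitter half is allowed to mate.
        parents = ranked[:pop_size // 2]
        # (4) Reproduction and variation: crossover plus low-rate mutation.
        pop = parents[:]                        # keep the better half
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            if rng.random() < crossover_rate:
                cut = rng.randrange(1, chrom_len)   # one-point crossover
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            # Mutation flips each bit with a small probability.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            pop.append(child)
        # (5) Repeat steps 2-4 (here for a fixed number of generations).
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits in the chromosome.
best = genetic_algorithm(fitness=sum)
```

With the "one-max" objective the loop quickly drives the population toward the all-ones chromosome; a real scheduling application would replace the bit string and fitness function with a schedule encoding and a performance measure.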

2.3.5 Artificial Neural Networks

Because of their potential ability to mimic human intelligence, artificial neural networks (ANNs) and their applications have been a popular and active research area over the past decade. ANNs have been studied with the hope that artificial intelligence systems will eventually perform some of the tasks that now require human intelligence and compensate for some of the human weaknesses in doing such jobs. The essence of all ANNs is the mapping of a set of inputs (input vector) to a set of outputs (output vector) according to a certain mapping relationship encoded in their structure. ANNs tend to capture in a black box the general relationships between inputs and outputs that are difficult or impossible to represent by any analytical model (Chryssolouris et al., 1992), and their ability to learn and remember distinguishes them from other techniques. ANNs have been studied and applied in a broad range of applications which include pattern recognition and classification, prediction, and optimization. Many research results have shown that ANNs do have the potential to provide a high level of performance in getting satisfactory results, in a timely and efficient way, in solving some of the NP-complete problems involving chains of decisions, such as job-shop scheduling problems. Artificial neural networks are composed of basic computational elements (called neurons) arranged according to a certain paradigm and linked together by interconnections represented by the values of a set of free variables (called weights). The network is then trained based on the chosen training algorithm so that a set of inputs will produce the desired set of outputs. During the training process, the weights are adjusted to better represent the relationship between the input vector and output vector. When the training is done according to certain criteria, the training results of a network are represented by the final values of the weights.
Once a network is properly formed and trained, it has the ability to generalize the knowledge it has learned through the training process, and when a similar new case is encountered, it can derive the appropriate result from the experience it has learned. A properly trained ANN's response can be insensitive to minor noise in its input, and this ability makes ANNs vital to pattern recognition in a real-world environment and thus makes ANNs a preferred technique for pattern recognition where conventional computation techniques perform unsatisfactorily. Although the fundamental concept of ANNs remains similar, many network paradigms have been developed with different network interconnections and mapping algorithms. They have different advantages and limitations and are preferred in different applications. Among the paradigms, the ones used most frequently in job-shop scheduling problems are Hopfield networks and feed-forward (backpropagation) networks. The Hopfield network has been primarily used in solving optimization problems. Hopfield and Tank (1985) mapped the traveling salesman problem (TSP), as an example, into the network and achieved very encouraging results. Foo and Takefuji (1988a, 1988b) introduced an artificial neural network based on stochastic Hopfield networks to solve job-shop scheduling problems with a fixed number of jobs and machines. They concluded that "this stochastic approach produces near optimal solutions"; however, optimal solutions were not guaranteed. Zhou et al. (1991) modified the approach to apply it to a more complex scheduling environment. A simpler network structure and better results have been reported with the use of a linear cost function. Arizono et al. (1992) used a stochastic neural network formulated based on the Gaussian machine model in a scheduling problem to minimize total actual flow time. They emphasized the importance of the definition of the network and energy function to the performance of such networks, and have obtained very interesting results.
Other relevant works have been done by Lo and Bavarian (1991) and Vaithyanathan et al. (1992). Nevertheless, this research has indicated that the approach is not quite suitable for real-time, large-scale problems because of its computational inefficiency and its tendency to produce local optima depending on the initial state of the network. While Hopfield neural networks have been used in optimization problems, backpropagation neural networks, which are mapping networks composed of multi-layer perceptrons that provide more freedom in the structure of the network, are found more suitable for real-time, dynamic problems (Wu, 1987). Backpropagation networks are a technique more preferred in pattern-recognition related approaches. They have been used to recognize patterns in scheduling situations and make the decision of choosing a suitable scheduling policy (rule), based on the pattern recognized, from a larger set of available rules. Chryssolouris et al. (1990) studied the use of artificial neural networks together with simulation to determine operational policies for manufacturing systems by identifying the relative importance of the operational decision making criteria for given performance goals. They concluded that the proposed neural network integrated procedure is better suited to complex systems than conventional methods. Pierreval (1992) built a dispatching rule selector using a simple basic backpropagation neural network for a simplified job shop. Good results have been reported, although no dynamic setup time is involved. Rabelo et al. (1992, 1993) utilized modular backpropagation neural networks in candidate dispatching rule selection, and concluded that ANNs have proved to be a robust tool in the selection of appropriate rules. Various good results have been achieved. Other relevant works include those of Wu (1989) and Yih and Jones (1992).
Almost all of this research implies that the appropriate representation of the input data is crucial to the successful training of a neural network. Recent research interest has been growing in the integration of neural networks with other techniques to form an integrated system which, once properly built, will provide an environment in which each technique complements the others in such a way that the system can generate better solutions than each technique applied alone.

2.4 Pattern Recognition and Neural Networks

Pattern recognition theory and practice is the research area that studies the operation and design of systems that have the ability to classify a set of input data into one of a number of categories or classes, often instantaneously without conscious thought. The area of pattern recognition has evolved as a setting for studying general statistical decision methods, particularly as they relate to implementation as information-processing algorithms, which are generally implemented on a computer to provide automatic recognition of patterns without human intervention. Traditionally, pattern recognition techniques include statistical pattern recognition, which encloses subdisciplines like discriminant analysis, feature extraction, error estimation and cluster analysis, and syntactical pattern recognition, which encloses grammatical inference and parsing. Pattern recognition theory and techniques have been used in many areas. Typical areas of application include text recognition, image recognition, voice recognition, medical diagnosis and analysis, economic trends and patterns, and industrial control systems such as quality prediction and process optimization. Pattern recognition problems can be divided into two categories - supervised pattern recognition and unsupervised pattern recognition (clustering) - based on whether or not the individual classes of input data patterns are already known. In supervised pattern recognition, a portion of the known patterns is extracted as the training set and used to derive a classification algorithm. Then the algorithm is tested and evaluated with the remaining known patterns; the testing output dictates appropriate modifications to the algorithm until a satisfactory performance is achieved. The algorithm can then be used to classify new patterns.
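The supervised procedure just described (derive a classifier from a portion of the known patterns, then evaluate it on the held-out remainder) can be illustrated with a deliberately simple nearest-centroid classifier; the data, class labels, and function names below are invented for illustration and are not taken from this research.

```python
import math

def train_nearest_centroid(samples):
    """Training phase: derive the classification rule (one centroid
    per class) from patterns whose classes are known in advance."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Recognition phase: assign a new pattern to the class whose
    centroid is nearest in Euclidean distance."""
    return min(centroids,
               key=lambda label: math.dist(features, centroids[label]))

# Known patterns are split into a training set and a held-out test set.
train_set = [([1.0, 1.0], "A"), ([1.2, 0.8], "A"),
             ([5.0, 5.0], "B"), ([4.8, 5.2], "B")]
test_set = [([0.9, 1.1], "A"), ([5.1, 4.9], "B")]

centroids = train_nearest_centroid(train_set)
accuracy = sum(classify(centroids, f) == y
               for f, y in test_set) / len(test_set)
```

The evaluation on the test set plays the role described in the text: if the accuracy were unsatisfactory, the classifier would be modified and re-derived before being used on new patterns.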
In unsupervised pattern recognition (clustering) problems, the classes of the patterns, and even the number of classes, are not available; the attempt is to find classes of patterns with like properties. In either case, the learning ability of the algorithms and systems plays a vital role in their successful implementation. Unfortunately, however, this ability to learn is the weakness of most of the traditional approaches. Recent developments in neural networks have demonstrated that neural networks are good at pattern recognition, classification and evaluation. The extraordinary ability of a neural network to learn from experience and identify a set of previously learned data, even in the presence of noise and distortion in the input pattern, has made it an excellent candidate for any application requiring pattern recognition. The utilization of neural networks as a tool for pattern recognition, a substantial departure from traditional approaches, has accelerated the growth and broadened the range of use of pattern recognition applications. In such a pattern recognition system, the neural network is used to learn patterns and relationships in data, and to identify patterns from what it has learned via the training stage. No mathematical algorithm is derived throughout the training process. An advantage of a neural network is that if the process being analyzed changes over time, only new examples need to be collected and the network trained again. The networks used for pattern recognition are usually large and redundant, have many parameters, are difficult to train, do not need information on densities or metrics, and yield robust results over a wide range of pattern recognition problems. Pattern recognition systems incorporating neural networks have been found successful in various applications, from cancer diagnosis, image recognition and processing, to manufacturing control applications. Similar approaches are also used in manufacturing scheduling problems, and the results obtained are generally better than using scheduling rules alone in a dynamic job-shop environment (Rabelo et al. 1991, 1992, 1993, 1995). A certain level of intelligence in terms of making the correct and timely scheduling decision has been shown in such systems.

Chapter 3 System Architecture

3.1 System Overview

The primary objective of the intelligent pattern recognition scheduling system developed in this research is to make a proper decision in selecting a proper dispatching rule, or derive a new rule if there is no proper rule available, for the given situation. The decision will be made at each scheduling decision making point, when a machine needs to decide which one of the candidate jobs waiting in the queue will be the next one to process, based on the current state of the system and the given performance criterion. This task is accomplished through the performance of various functions in the following three main modules of the system: an objective/performance definition and data acquisition module, a data preprocessing module, and a scheduling pattern recognition/decision making/optimization module (Figure 3-1). The purpose and functionality of these modules and their member functions are described in the subsequent sections of this chapter.


Figure 3-1. The general configuration of the intelligent scheduling system

3.1.1. Assessment Module

Two functions are performed in this module (Figure 3-2): an objective/performance criteria definition function and a system attributes data acquisition function. The first function defines the problem for the system to solve by formulating the objectives/performance measures as well as the constraints of the system from the inputs provided by the users, based on the current state of the system and the planning and management needs, as well as from the feedback of the execution of any system decision that needs re-evaluation. The output of this function determines the focus of the system performance and thus determines the characteristics of the system. The data acquisition function gathers analog data from the physical world and converts them into a set of measured data in digital format suitable for computer processing. These data define the attributes of the system, which describe the system configuration through information such as the current queue size, processing times, due date related information, setup times, etc. These two functions are performed off-line as a preliminary data preparation process.


Figure 3-2. Module for objective definition and data acquisition

In this module, performance measures are selected based on their functionality and their utilization in industrial applications; feasible dispatching rules, as candidate scheduling alternatives, are selected from a larger number of available rules based on their characteristics and general performance in industrial scheduling problems. The definitions of the selected performance measures and dispatching rules are given in section 3.2. The data describing the system attributes are acquired from the manufacturing environment, and are determined by the process plans and batch requirements. In this research, the batch job/system attributes data are generated from computer simulations by simulating the real-time job-shop environment with various process plans. A job database is established to store the data obtained from both functions, and these data will be further processed into the format suitable for the training of the pattern recognition networks.

3.1.2. Data Preprocessing Module

In this module, the measured data obtained in the objective definition and data acquisition module are processed and prepared as inputs for the scheduling decision making module, via the performance of several data processing functions (Figure 3-3).


Figure 3-3. Data preprocessing module

This module consists of a feature extraction function, a dispatching rule performance evaluation function, and a data combination/normalization and decision support data formulation function. The raw data that describe the attributes and characteristics of the current state of the system are first processed by the feature extraction function, and the data are grouped into a set of extracted characteristic features with different patterns. The feature extraction process is adjusted, and different characteristic patterns of the system are extracted, according to different system requirements as well as the feedback from the decision making process. The dispatching rule performance evaluation function evaluates and ranks the relative performance of the selected dispatching rules for a given system configuration and performance criteria through computer simulation. These performances and rankings of the rules are later used as criteria for rule selection among the candidate rules. The data combination and normalization/decision support data formulation function combines the extracted features of the data and the performance of the rules together in such a way that it converts a set of discrete data into a mathematical pattern vector, which appears as a point in the designated pattern space, for the given system specification and performance criterion. These patterns are then formulated and normalized into the formats suitable for the training of the neural networks in the scheduling pattern recognition/decision making module.
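As a rough illustration of the combination and normalization steps, one snapshot of the job queue can be turned into a single normalized pattern vector. The feature set, attribute names, and scaling scheme below are assumptions for illustration, not the ones used in this research.

```python
def build_pattern_vector(jobs, rule_rankings):
    """Combine extracted job features with rule performance rankings and
    min-max scale every component into [0, 1], yielding one pattern
    vector (a point in the pattern space) per system snapshot."""
    # Feature extraction: summary statistics of the raw job attributes.
    features = []
    for attr in ("p", "s", "slack"):          # illustrative attribute names
        values = [job[attr] for job in jobs]
        features += [min(values), max(values), sum(values) / len(values)]
    # Combination: append the rule performance rankings to the features.
    vector = features + rule_rankings
    # Normalization: rescale so every component lies in [0, 1].
    lo, hi = min(vector), max(vector)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in vector]

# One snapshot of a two-job queue, plus hypothetical rankings of 3 rules.
jobs = [{"p": 4, "s": 1, "slack": 6}, {"p": 2, "s": 2, "slack": 3}]
pattern = build_pattern_vector(jobs, rule_rankings=[1, 3, 2])
```

In the actual system the corresponding vectors are produced for many snapshots and performance criteria, and it is these normalized vectors that the neural networks in the next module are trained on.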

3.1.3. Pattern Recognition/Decision Making/Optimization Module

This module makes the final decision in selecting or deriving the scheduling policy (rule) that is suitable for the designated manufacturing environment in a real-time mode. The decision is made based on the data, extracted and formulated in the data preprocessing module, which describe the patterns of the current state of the system and the imposed performance criterion. This decision making process is accomplished through an off-line training process and an on-line (real-time) rule selection and/or optimization process. This module includes the performance of three functions: a neural network dispatching rule selection function, a selected rule(s) evaluation function, and an optional genetic optimization function (Figure 3-4).

Figure 3-4. Module for scheduling pattern recognition/decision making/optimization

The dispatching rule selection function uses the data prepared in the data preprocessing module as input and selects, as output, one or a small number of rules that have the best performance for the given performance criterion from a larger collection of available candidate rules. This rule selection process is vital to the successful implementation of the scheduling system because (1) the rule or the small number of candidate rules selected will be used either as the "seed" rules for further optimization, or as the final scheduling policy, depending on the given scenario, and in either case the results will very much decide the quality and performance of the whole system; and (2) the efficiency and adaptivity of this process will determine the response time of the system, and thus its usability for real-time scheduling. This requires that the rule selection process must be able not only to recognize the best rule(s) that fit the given system configuration and performance criterion, but also to adapt easily and quickly to changes in the state of the system. Neural networks are used to carry out this dispatching rule selection task because of their real-time nature and their learning and generalization capabilities. Neural networks have the ability to learn from experience and generalize the knowledge they have learned, and they can identify previously learned patterns even in the presence of a certain level of noise and distortion in the input data. These unique capabilities of neural networks have made them a robust and adaptive tool for the rule selection process. A separate expert neural network rule selector, with its own network configuration, is developed to optimize each objective (performance measure). The neural network learns the knowledge about the system through an off-line training process.
A group of data formulated in the data preprocessing module, which covers the patterns reflecting different states of the system and a wide range of performance measures and dispatching rules, is used in the training process as the source of information for each individual neural network to extract and generalize the knowledge of the given system and develop the ability to rank the performance of the candidate rules for each individual performance measure of interest. After a network is well trained, it can be used on-line to quickly select the top ranking rule(s) for the given system status and desired performance criterion, based on the pattern of the current state of the system and the knowledge learned from the training process. In this research, backpropagation neural networks are used in the rule selection function because they are well established and highly effective, especially in cases where the input/output relationships are not linear, or high-order correlations among the input variables are involved. The details of the backpropagation paradigm are described in section 3.3 of this chapter.

After the desired rules are selected by the rule selection function, these rules are evaluated by the rule evaluation function. Real-time simulation is used in this evaluation process. In this process, the rules selected are analyzed to determine the impact of each rule on the performance of the system. Depending on the requirements for the system performance and the rule selection results, this process will determine whether a re-training of the neural network and additional data formulation are required, as well as whether further optimization of the rule is needed. The rules selected will then be passed to the genetic rule optimization function or be used as the final scheduling decision. Sometimes, the rule(s) selected for the system status need to be further optimized in order to meet a certain system objective, and this optimization is accomplished by integrating the good features and eliminating the bad features of the best performing rules through the genetic optimization function. This genetic optimizer uses the rules selected by the neural network as the "seeds" and, through the application of genetic algorithms, generates the most suitable rule as the final scheduling policy for the given system configuration and performance measure. Using the "seed" rules as some of the initial population for the genetic algorithms can reduce the time and effort needed for the genetic algorithms to find the final optimal scheduling policy. This final policy can be one of the rules selected by the neural rule selector, or it can be a totally new one derived by the process, depending on the current state of the system and the imposed performance criterion. The genetic algorithms used in this research are described in section 3.4 and in Chapter 4.
This intelligent pattern recognition scheduling framework has been applied, with some variation, in two case studies in this research: a single-machine sequencing problem and a multiple-machine scheduling problem. These two cases are discussed in detail in Chapters 4 and 5, respectively.

3.2 Dispatching Rules and Performance Measures

3.2.1 Definition of Symbols and Terms

Various symbols and terms, used throughout this thesis, appear in the definitions of the dispatching rules and performance measures. These symbols and terms are defined as follows:

n    Job queue size

j    Job number in queue (j = 1, 2, ..., n)

pj   Processing time

sj   Set-up time

aj   Arrival date

dj   Due date

cj   Current time (ready time)

Flowtime (F): Fj = cj + pj + sj - aj

The amount of time a job spends in the system.

Lateness (L): Lj = cj + pj + sj - dj

The amount of time by which the completion time of a job exceeds its due date.

Tardiness (T): Tj = max(Lj, 0)

The lateness of a job if it fails to meet its due date, or zero otherwise.

Critical Ratio (CR): CRj = (dj - cj) / (pj + sj)

Slack Time (SLACK): SLACKj = dj - (cj + pj + sj)

Static Slack (SSLACK): SSLACKj = dj - (aj + pj + sj)

Job Slack Ratio (SLACK/RT): SLACK/RTj = [dj - (cj + pj + sj)] / (dj - cj)
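The per-job quantities above follow directly from the five job attributes. The sketch below computes them using the document's definitions; the function and field names are illustrative, and the Flowtime formula (completion time minus arrival date) is the standard single-machine reading of "time spent in the system".

```python
def job_measures(p, s, a, d, c):
    """Compute the per-job quantities defined in section 3.2.1
    (p = processing time, s = set-up time, a = arrival date,
    d = due date, c = current/ready time)."""
    completion = c + p + s                     # when the job finishes
    lateness = completion - d                  # L_j
    return {
        "flowtime": completion - a,            # F_j: time in the system
        "lateness": lateness,                  # L_j
        "tardiness": max(lateness, 0),         # T_j = max(L_j, 0)
        "critical_ratio": (d - c) / (p + s),   # CR_j
        "slack": d - completion,               # SLACK_j
        "static_slack": d - (a + p + s),       # SSLACK_j
        "slack_ratio": (d - completion) / (d - c),  # SLACK/RT_j
    }

# A job with p = 4, s = 1, arriving at 0, due at 10, ready time 2.
m = job_measures(p=4, s=1, a=0, d=10, c=2)
```

For this job the completion time is 7, so it is early (negative lateness, zero tardiness) with 3 units of slack remaining.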

3.2.2 Definitions of Selected Dispatching Rules

A dispatching rule is the procedure or policy used to select the next job to be processed from a set of jobs awaiting service. Scheduling rules can be very simple or extremely complicated according to their specific attributes. Their performance depends heavily on the purpose of the system, or the performance criterion chosen to measure them, as well as on the configuration of the manufacturing system, and the selection of a particular rule is in general not obvious. In this study, selected rules are incorporated into the neural networks as candidates for rule selection. The dispatching rules used in this research are a representative, but not exhaustive, selection of the rules available and typically utilized in the scheduling of dynamic manufacturing systems. These rules are selected based on their characteristics and general performance in industrial scheduling problems under different circumstances:

SPT (Shortest Processing Time):
    The job that has the shortest processing time (i.e. min{pj}) will be processed first.

LPT (Longest Processing Time):
    The job that has the longest processing time (i.e. max{pj}) will be processed first.

FIFO (First In First Out):
    The job that arrives in the queue first will be processed first.

LIFO (Last In First Out):
    The job that arrives in the queue last will be processed first.

SST (Shortest Set-up Time):
    The job that has the shortest set-up time will be processed first.

LST (Longest Set-up Time):
    The job that has the longest set-up time will be processed first.

SPST (Shortest Processing and Set-up Time):
    The job that has the shortest combined processing and set-up time will be processed first.

LPST (Longest Processing and Set-up Time):
    The job that has the longest combined processing and set-up time will be processed first.

EDD (Earliest Due Date):
    The job that has the earliest due date will be processed first.

LDD (Latest Due Date):
    The job that has the latest due date will be processed first.

mSLACK (Minimum Slack Time):
    The job that has the least amount of slack time will be processed first.

MSLACK (Maximum Slack Time):
    The job that has the largest amount of slack time will be processed first.

CR (Critical Ratio):
    The job that has the smallest critical ratio value will be processed first.

SSLACK (Static Slack Time):
    The job that has the least amount of static slack time will be processed first.

SLACK/RT (Job Slack Ratio):
    The job that has the smallest ratio of slack time to remaining operation time will be processed first.

LWR (Least Work Remaining):
    The job that needs the least amount of work to finish will be processed first.

LNOR (Least Number of Operations Remaining):
    The job that has the least number of remaining operations will be processed first.
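Most of the rules above reduce to ranking the jobs in the queue by a single key and taking the extreme. A minimal sketch covering a subset of the rules (the job dictionary fields mirror the symbols of section 3.2.1 and are illustrative, not the thesis data format) might look like:

```python
# Each rule maps to a key function; the next job is the one that
# minimizes that key over the queue (maximizing rules negate the key).
RULES = {
    "SPT":    lambda job: job["p"],                    # shortest processing time
    "LPT":    lambda job: -job["p"],                   # longest processing time
    "FIFO":   lambda job: job["a"],                    # earliest arrival first
    "SST":    lambda job: job["s"],                    # shortest set-up time
    "SPST":   lambda job: job["p"] + job["s"],         # shortest p + s
    "EDD":    lambda job: job["d"],                    # earliest due date
    "mSLACK": lambda job: job["d"] - (job["c"] + job["p"] + job["s"]),
    "CR":     lambda job: (job["d"] - job["c"]) / (job["p"] + job["s"]),
}

def dispatch(queue, rule):
    """Select the next job from the queue under the given rule."""
    return min(queue, key=RULES[rule])

queue = [
    {"p": 5, "s": 1, "a": 0, "d": 20, "c": 3},
    {"p": 2, "s": 2, "a": 1, "d": 9,  "c": 3},
]
next_spt = dispatch(queue, "SPT")   # shortest processing time wins
next_edd = dispatch(queue, "EDD")   # earliest due date wins
```

Expressing each rule as a key function makes it easy to add candidates to the rule set, which is exactly the role the candidate rules play as inputs to the neural rule selector.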

3.2.3 Definition of Selected Performance Measures

Performance measures are criteria defined to evaluate the performance of a manufacturing scheduling system. Each performance measure examines a different aspect of the system performance, and the dispatching rules can perform differently according to different performance measures. Based on their functionality, as well as the importance and frequency of their utilization in the real-time measurement of the scheduling systems of job-shop manufacturing systems, the following seven performance measures are used in this research in order to measure the performance of the system under different circumstances and the benefits gained:

Maximum Flowtime (MaxFT): Fmax = max{Fj}, 1 ≤ j ≤ n. The purpose is to minimize the system's maximum flowtime.

Mean Flowtime (MFT): F = (1/n) Σ Fj, j = 1, ..., n. The purpose is to minimize the system's mean flowtime.

Maximum Tardiness (MaxTD): Tmax = max{Tj}, 1 ≤ j ≤ n. The purpose is to minimize the system's maximum tardiness.

Mean Tardiness (MTD): T = (1/n) Σ Tj, j = 1, ..., n. The purpose is to minimize the system's mean tardiness.

Work-in-process Inventory (WIP): W = [Σ (pj + sj)(n + 1 − j)] / n, j = 1, ..., n. The purpose is to minimize the system's work-in-process inventory level.

Machine Utilization (MACH): M = Σ pj / Σ (pj + sj), j = 1, ..., n. The purpose is to maximize the system's machine utilization rate.

Throughput (THRU): H = n / Σ (pj + sj), j = 1, ..., n. The purpose is to maximize the whole system's throughput.

Among these performance measures, all seven are used in the single-machine scheduling problem, while Mean Flowtime (MFT) and Mean Tardiness (MTD) are used in the multiple-machine scheduling problem.
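The measures above can be computed for a candidate processing sequence roughly as follows. This is a simplified sketch: flowtime is measured from the batch start (arrival times are omitted), and each job carries a fixed set-up time, whereas in the thesis set-up times are sequence-dependent.

```python
def performance_measures(jobs, sequence, current_time=0.0):
    """Evaluate a processing sequence over (p, s, due) job tuples.
    `sequence` lists job indices in the order they are processed."""
    n = len(sequence)
    t = current_time
    flowtimes, tardiness = [], []
    total_p = total_ps = wip = 0.0
    for k, idx in enumerate(sequence):
        p, s, due = jobs[idx]
        t += p + s                        # completion time of this job
        flowtimes.append(t - current_time)
        tardiness.append(max(0.0, t - due))
        total_p += p
        total_ps += p + s
        wip += (p + s) * (n - k)          # (p_j + s_j)(n + 1 - j) with 1-based j
    return {
        "MaxFT": max(flowtimes),
        "MFT":   sum(flowtimes) / n,
        "MaxTD": max(tardiness),
        "MTD":   sum(tardiness) / n,
        "WIP":   wip / n,
        "MACH":  total_p / total_ps,      # fraction of busy time spent processing
        "THRU":  n / total_ps,            # jobs completed per unit of busy time
    }
```

For a two-job batch `[(2, 1, 3), (4, 1, 20)]` processed in order `[0, 1]`, the completions fall at 3 and 8, giving MaxFT = 8 and MFT = 5.5.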

3.3 Backpropagation Paradigm

Backpropagation is a scheme by which a multi-layered feedforward mapping network is trained to become a pattern-matching engine, and backpropagation neural networks are among the most widely used techniques in pattern-recognition related approaches. The backpropagation neural networks used in this research are based on the paradigm developed by Rumelhart et al. (1986). A typical multi-layer feedforward neural network consists of an input layer, at least one hidden layer, and an output layer. Each layer is composed of simple processing elements called neurons; the neurons of adjacent layers are connected, and the strengths of the connections, referred to as weights, determine the configuration and characteristics of the network.

Figure 3-5. A typical feedforward neural network (input vector → input layer → hidden layer → output layer → output vector)

The network is trained by presenting it with pairs of an input vector (in this research, the inputs are information concerning the characteristics of the system, such as job processing times, due dates, etc.) and a desired output, or target, vector (in this research, the outputs are the performance rankings of the selected dispatching rules for a given performance measure). The objective of the training process is to adjust the weight matrices so that the network will eventually produce the matching output pattern when given the corresponding input pattern of that pair. The neural network training process can be viewed as a minimization process in which the error between the actual output and the target is minimized. The error is computed and the error signals propagate back through the network during the training process, and the weights are altered based on these error signals so that the output moves closer to the target. The network learns the internal representation through a series of iterations of this process, until the error falls below the desired level. An error function is used to measure the error between the actual output O and the target vector T at the output layer:

E_p = (1/2) Σ_i (t_pi − O_pi^l)²

where p represents the input/output pattern, i is the index indicating a specific neuron in the output layer, l is the index indicating the total number of layers, t_pi is the target, and O_pi^l is the actual output of the ith output unit. The backpropagation training algorithm uses a gradient descent method that calculates the partial derivative of the error with respect to each weight (Δw) and changes the weight in the direction that will minimize the error (the negative of its derivative). The weight adjustment factor Δw for the weight between the jth unit of layer m−1 and the ith unit of layer m can be expressed as:

Δw_ij^m = −μ ∂E/∂w_ij^m

in which the combined weighted input (net input) to the ith unit of layer m (θ_i is the bias to this unit) is

net_i^m = Σ_j w_ij^m O_j^(m−1) + θ_i,   therefore   ∂net_i^m / ∂w_ij^m = O_j^(m−1).

The activation function that calculates the output of the same unit is the logistic sigmoid

O_i^m = f(net_i^m) = 1 / (1 + e^(−net_i^m)),   and   f′(net_i^m) = O_i^m (1 − O_i^m).

The variable δ is defined as the partial derivative of the error with respect to the net input, and is calculated by propagating the error back through the network starting from the output layer.

For the output layer l:

δ_i^l = (t_i − O_i^l) O_i^l (1 − O_i^l)

For the preceding (hidden) layers:

δ_i^m = O_i^m (1 − O_i^m) Σ_k δ_k^(m+1) w_ki^(m+1)

The weight adjustment factor then is calculated as Δw_ij^m = μ δ_i^m O_j^(m−1), where μ is a constant called the learning rate. The weight adjustment function can then be expressed as:

w_ij(t) = w_ij(t − 1) + Δw_ij(t)

Backpropagation networks are usually slow to train, and the training process using gradient descent can, in certain cases, cause the weights to oscillate rather than converge smoothly. To improve the training speed and convergence rate, a momentum term is often applied to make the current weight adjustment a function of the previous weight change.

The momentum is added to the weight adjustment function as follows:

Δw_ij(t) = μ δ_i^m O_j^(m−1) + β Δw_ij(t − 1)

where t represents the current state and β is the weight factor for the momentum, called the momentum factor. There is no unique standard for the correct values of the learning rate μ and the momentum factor β; values between 0 and 1 are usually used. Before the training starts, the weights are initialized to small random numbers, usually within the range of ±0.2. The weights are then adjusted through a sequence of iterations of the error-propagating process by passing through the entire training set (an epoch). The error measure used in this research is the RMS (root mean square) error for each epoch:

RMS = sqrt( Σ_p Σ_i (t_pi − O_pi)² / (N_o N_p) )

where N_o is the number of output units and N_p is the number of patterns in each epoch.
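The training loop above can be sketched compactly. This is a minimal illustration with batch updates, logistic activations, and momentum; biases and per-pattern updates are omitted for brevity, and the network shape is arbitrary rather than the 16-input, 15-output configuration used in the thesis.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_backprop(X, T, n_hidden=4, mu=0.3, beta=0.8, epochs=500, seed=0):
    """Minimal batch backpropagation with momentum for a 3-layer network.
    mu is the learning rate, beta the momentum factor, both in (0, 1)."""
    rng = np.random.default_rng(seed)
    # weights initialised to small random values (roughly +/-0.2, as in the text)
    W1 = rng.uniform(-0.2, 0.2, (X.shape[1], n_hidden))
    W2 = rng.uniform(-0.2, 0.2, (n_hidden, T.shape[1]))
    dW1 = np.zeros_like(W1)
    dW2 = np.zeros_like(W2)
    rms_history = []
    for _ in range(epochs):
        H = sigmoid(X @ W1)                  # hidden-layer activations
        O = sigmoid(H @ W2)                  # network output
        err = T - O
        rms_history.append(float(np.sqrt(np.mean(err ** 2))))
        # delta at the output layer: (t - o) * f'(net) with logistic f
        d2 = err * O * (1 - O)
        # delta propagated back to the hidden layer
        d1 = (d2 @ W2.T) * H * (1 - H)
        # weight update with momentum: dw(t) = mu*delta*o + beta*dw(t-1)
        dW2 = mu * (H.T @ d2) + beta * dW2
        dW1 = mu * (X.T @ d1) + beta * dW1
        W2 += dW2
        W1 += dW1
    return W1, W2, rms_history
```

Trained on a toy mapping such as XOR, the recorded RMS error falls over the epochs, mirroring the convergence behaviour described above.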

3.4 Genetic Algorithms as Schedule Optimizers

In certain cases, after the evaluation of the scheduling rule selected from the available rules by the neural rule selector, it is decided that further optimization of the scheduling policy is required in order to meet the system requirements. The genetic optimizer, incorporating genetic algorithms, is applied to optimize the final scheduling policy. Genetic algorithms are chosen for the schedule optimization process because of their unique capacity for seeking superior solutions. Genetic algorithms imitate the biological process of natural selection, in which only the fittest individuals in a population survive. They represent a class of optimization search techniques that are adaptive, robust, and general. The optimization process using genetic algorithms involves a search in a high-dimensional space for superior solutions, although the algorithms operate without knowledge of the search space. Genetic algorithms differ from conventional optimization techniques in that they seek continuous improvement of a population of solutions by maintaining and modifying the characteristics of that population over a number of generations, with each new generation having more individuals with desirable characteristics than its predecessors, rather than by iteratively adjusting the parameters of one model to produce a desired result.

Before starting the optimization process with genetic algorithms, an objective function needs to be defined as the goal of the search process; superior solutions in each generation are selected solely by their objective function values. After the objective function is defined, the genetic algorithm performs population improvement in the following steps:

1. Population initialization

2. Reproduction

3. Crossover

4. Mutation

5. Repeat steps 2-4 if necessary

A simple example is presented here to illustrate the concept of genetic algorithms and how a genetic algorithm performs optimization using the above procedures. The objective of the example is to search for a binary string that matches the target string [1001101].

Objective function (Oi): the objective function is defined as the number of bits in the string that match the corresponding elements in the target string. That is, the ultimate goal is to find a string whose objective function value is Oi = 7.

Population initialization: an initial population needs to be produced, and the initial values of the first generation of solutions can be randomly assigned. However, since genetic algorithms produce individuals that are superior to others within the same population, good "seed" members with high-quality "genes" in the initial population are essential for genetic algorithms to find a more globally optimal solution, and to find it with less effort. In this example, a population of 4 different strings is initialized randomly (table 3-1).

Table 3-1 Population initialization
String Index (i)   String    Objective Function (Oi)
0-1                1001011   5
0-2                0100010   1
0-3                1110101   4
0-4                1100110   2

Reproduction: this process produces the next generation of the population from the current population by copying the basic building blocks from parent chromosomes to offspring chromosomes a number of times proportional to the values of the objective functions. Individuals with higher objective function values are given a higher mating rate to ensure a higher probability of producing better or equally good offspring. The individuals with the lowest objective function values are not copied over to the next generation, i.e., they become extinct in the course of evolution. In the example, the two strings (numbers 1 and 3) with the highest objective function values (5 and 4) are reproduced as the parents for developing the superior offspring of the next generation, and the two strings (numbers 2 and 4) with the lowest objective function values (1 and 2) are extinct (table 3-2).

Table 3-2 Reproduction of superior strings
String Index (i)   String    Objective Function (Oi)
0-1                1001011   5
0-3                1110101   4
0-1                1001011   5
0-3                1110101   4

Crossover: this process exchanges genetic material and passes characteristics from the selected parents to the offspring by randomly mating pairs of the individuals in the new generation. The performance of this operator largely determines the quality of the population of the next generation. First, a random crossover point between 1 and 6 is selected for each pair of strings. This point dictates how many bits on the right end of the string should be exchanged between the two mating strings. Assuming the crossover point is 3 for the first pair, the crossover process and results are shown in figure 3-6.

Figure 3-6. Crossover process (parent strings → offspring strings)

Similarly, the crossover point for the second pair is selected as 5, and the new generation created after the crossover process, with the objective function values of its members, is shown in table 3-3.

Table 3-3 New population after one generation
String Index (i)   Offspring Strings   Objective Function (Oi)

Mutation: this process randomly changes the chromosomes with the intention of forcing the chromosomes to explore more of the solution space and preventing evolutionary dead ends. However, the mutation rate is usually kept very low to avoid destroying the established genetic structure; therefore, mutation may not occur in many generations.

After a new generation is produced, each chromosome, or possible solution, is evaluated by examining the value of its objective function, i.e., how well that solution meets the objective. If necessary, the processes of reproduction, crossover, and/or mutation are repeated until the objective function value reaches the target value, which is 7 in this example. Usually, good results can be achieved within a few generations, and convergence is even faster when at least one member of the initial population carries a "high-quality" gene. Table 3-4 shows the reproduction results of the superior strings from the second generation, as well as the random crossover points for each pair of strings (2 for the first pair and 4 for the second pair).

Table 3-4 Reproduction of superior strings
String Index (i)   Strings   Objective Function (Oi)   Random Point for Crossover
1-1                1000101   6                         2
1-3                1001001   6                         2
1-1                1000101   6                         4
1-3                1001001   6                         4

Table 3-5 shows the results after two generations. The value of the objective function for string 2-4 equals the target objective function value, and the string matches the target string. The search process therefore finishes after two generations.

Table 3-5 Results after Two Generations
String Index (i)   Offspring Strings   Objective Function (Oi)
2-1                1001001             6
2-2                1000101             6
2-3                1000001             5
2-4                1001101             7
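The whole loop of the string-matching example — random initialization, reproduction of the fitter half, tail-swap crossover, and occasional mutation — can be sketched as follows. The population size, mutation rate, and seed are arbitrary choices for illustration, and the best string ever seen is tracked explicitly.

```python
import random

TARGET = [1, 0, 0, 1, 1, 0, 1]

def objective(s):
    # number of bit positions matching the target string
    return sum(a == b for a, b in zip(s, TARGET))

def evolve(pop_size=4, mutation_rate=0.05, max_gens=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    best = max(pop, key=objective)[:]
    for gen in range(max_gens):
        pop.sort(key=objective, reverse=True)
        if objective(pop[0]) > objective(best):
            best = pop[0][:]
        if objective(best) == len(TARGET):
            return best, gen
        # reproduction: the fitter half replaces the weaker half
        parents = pop[:pop_size // 2]
        pop = [p[:] for p in parents] + [p[:] for p in parents]
        # crossover: swap the right-hand tails of adjacent pairs
        for i in range(0, pop_size - 1, 2):
            cut = rng.randint(1, len(TARGET) - 1)
            pop[i][cut:], pop[i + 1][cut:] = pop[i + 1][cut:], pop[i][cut:]
        # mutation: occasionally flip one random bit of a string
        for s in pop:
            if rng.random() < mutation_rate:
                k = rng.randrange(len(TARGET))
                s[k] = 1 - s[k]
    return best, max_gens
```

As in table 3-1, the string 1001011 scores 5 against the target, and the search typically reaches a near-perfect match within a handful of generations.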

The effective representation (encoding) of the problem to be solved plays an important role in the successful implementation of genetic algorithms, and the coding method can have a significant effect on the accuracy and efficiency of the algorithm. Symbolic encoding emphasizes information about the relative order of the elements in a permutation; binary encodings are more suitable for conventional genetic operators and can simplify the problem to accommodate simpler operators; and direct real-value representations can reveal the nature of the problem more thoroughly. Because of the scope of this research and the nature of the problems studied, the genetic optimizer is applied in the single-machine problem only. The details of this application are described in chapter 4.

3.5 Summary

The proposed three-module pattern-recognition scheduling system integrates computer simulation techniques, dispatching rules, artificial neural networks, and genetic algorithms. The simulation techniques are used to simulate the real-time manufacturing environment, to acquire the job data, and to evaluate the performance of the scheduling system and of the different rules for a given performance measure. The performances of the dispatching rules, paired with the patterns of the known job data, are used to train the neural networks, and the trained neural networks are used to select the most suitable rule dynamically for a given performance measure based on the patterns revealed by the characteristics of the jobs to be processed as well as the current state of the manufacturing system. When further optimization is necessary to obtain satisfactory scheduling results, genetic algorithms are then applied to optimize the final scheduling policy (chapter 4). In that case, the neural network selection results are used as the initial population for the genetic algorithms in order to reduce the effort needed to obtain the desired result.

Chapter 4 Case Study 1: Single-machine Scheduling Problem

4.1 Description of the Problem

The scheduling problem for a single machine in a job-shop environment is studied in this research. The machine can be considered as a stand-alone workstation, or as one workstation among a group of machines in a flexible manufacturing system, with the assumption that the operation of one machine does not affect the processes of the other machines. In either case, it is important to make suitable scheduling decisions to control the process properly at the single-machine level. The single-machine system studied is shown in figure 4-1.

Figure 4-1. A single-machine system (input buffer, machine, output buffer, material-handling robot, and pattern-recognition scheduler)

The machine processes jobs in a batch mode. There are an input buffer and an output buffer for the machine. All new jobs to be processed by the machine wait in the input buffer. A robot performs the material handling by selecting a job from those waiting in the input buffer and setting up the machine following the selected scheduling strategy. Once the machine finishes processing, the finished job is unloaded immediately from the machine and placed in the output buffer. A scheduler/controller controls the process by providing the sequence of jobs for the machine to process, based on the machine status and the given performance measure. The objective is to develop a suitable scheduling strategy at each decision-making point of the machine, i.e., to determine the sequence in which the jobs will be processed. The sequence, or priority, of the jobs awaiting in the input buffer is arranged according to this scheduling strategy so that one of the selected performance measures is optimized. The framework for the pattern-recognition scheduling system described in chapter 3 is applied to the scheduling of this single-machine system with the following assumptions:

1. The capacity of the input buffer is enough for the batch size of ten (10) jobs.

2. Each job has an independent process plan, which includes its own arrival time, processing time distribution, due date, and job type.

3. The setup time for each job is job-type and sequence dependent, i.e., the setup time depends on the preceding job type as well as the current job type. There are a total of seven (7) possible job types.

4. The output buffer has enough capacity, i.e., the machine can unload its job and be ready for processing the next job whenever the process is finished. The unload time for a job is included in its processing time.

5. The jobs can be processed at any time within the scheduling time frame, and only one job can be processed at a given time.

4.2 Data Acquisition and Analysis

4.2.1 Job Data

A simulation model is developed in the C language to simulate the single-machine system and to build the job database by collecting the information that describes the characteristics of the jobs to be processed by the machine as well as the current system status and requirements. For a given batch, there are seven (7) process plans available for the system, each corresponding to one job type. The interarrival interval of a job arriving in the input buffer follows either a Poisson or an exponential distribution, the processing times are normally distributed with different mean and standard deviation values for different job types, and the due date for each process plan is based on a factor following either a normal or a uniform distribution (table 4-1). The simulator randomly selects a process plan and assigns it to each new job coming into the queue. The machine processes the job based on the plan attached to it.

Table 4-1 Process Plans
Process Plan   Job Type   Arrival Time   Processing Time   Due Date
1              1          P(25)          N(4, 0.2)         N(5, 2)
2              2          P(50)          N(6, 0.3)         U(3, 4)
3              3          P(45)          N(5, 0.2)         U(0, 10)
4              4          E(22)          N(3, 0.1)         N(1, 1)
5              5          E(75)          N(10, 0.6)        N(10, 2)
6              6          E(70)          N(8, 0.4)         N(5, 2)
7              7          E(100)         N(15, 0.75)       U(0, 10)
E - Exponential, N - Normal, P - Poisson, U - Uniform
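A batch generator following table 4-1 might look like the sketch below. The distribution parameters are transcribed from the table; how the sampled due-date factor maps to an absolute due date is not spelled out here, so the mapping due = arrival + factor × processing time is an assumption of this sketch.

```python
import numpy as np

# Process plans transcribed from Table 4-1.  E(m) = exponential with mean m,
# P(m) = Poisson with mean m, N(m, sd) = normal, U(a, b) = uniform.
PLANS = [
    ("P", 25,  (4, 0.2),   ("N", 5, 2)),
    ("P", 50,  (6, 0.3),   ("U", 3, 4)),
    ("P", 45,  (5, 0.2),   ("U", 0, 10)),
    ("E", 22,  (3, 0.1),   ("N", 1, 1)),
    ("E", 75,  (10, 0.6),  ("N", 10, 2)),
    ("E", 70,  (8, 0.4),   ("N", 5, 2)),
    ("E", 100, (15, 0.75), ("U", 0, 10)),
]

def generate_batch(size, seed=0):
    """Generate `size` jobs the way the simulator is described: a randomly
    chosen process plan per job.  The due-date mapping
    (due = arrival + factor * processing) is an assumption of this sketch."""
    rng = np.random.default_rng(seed)
    t, batch = 0.0, []
    for _ in range(size):
        k = int(rng.integers(len(PLANS)))
        kind, mean, (pm, psd), due = PLANS[k]
        # interarrival interval sampled from the plan's arrival distribution
        t += float(rng.poisson(mean)) if kind == "P" else float(rng.exponential(mean))
        p = max(0.1, float(rng.normal(pm, psd)))
        f = float(rng.normal(due[1], due[2])) if due[0] == "N" \
            else float(rng.uniform(due[1], due[2]))
        batch.append({"type": k + 1, "arrival": t,
                      "processing": p, "due": t + max(0.0, f) * p})
    return batch
```

Each generated job carries a type between 1 and 7, a non-decreasing arrival time, a positive processing time, and a due date no earlier than its arrival.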

Setup times for different job types may vary. The setup time for a given job depends on its job type as well as on the type of the preceding job processed by the machine, i.e., the setup time for a job depends not only on its type but also on the sequence in which the jobs in the batch are processed. The setup time matrix is shown in table 4-2.

Table 4-2 Setup Time Matrix

Each batch generated from the simulator is characterized by the queue size, the current time, and the job type of the preceding process. The characteristics of a job are described by its arrival time, processing time, due date, and job type, the last of which is used to determine the setup time along with the job's position in the queue. Table 4-3 shows an example of a job queue.

Table 4-3 A Sample Job Batch (job number, job type, arrival time, processing time, and due date for each of the ten jobs)
Queue Size = 10; Current Time = 6046; Preceding Job Type = 2

4.2.2 Performance Measures

Performance measures are defined to evaluate the performance of the system. In this research, seven scenarios are considered, each using a different performance measure as the system performance evaluator. These performance measures are: Mean Flowtime (MFT), Maximum Flowtime (MaxFT), Maximum Tardiness (MaxTD), Mean Tardiness (MTD), Work-in-process Inventory (WIP), Machine Utilization (MACH), and Throughput (THRU) (please refer to section 3.2.3 of chapter 3 for the definitions of these performance measures). The performance measures can be further grouped based on the factors that affect their results the most. Take the performance measure MFT, for example: by definition, the flow time for a certain job in the queue (position j) is

F_j = c_0 + Σ_{i=1}^{j} (p_i + s_i) − a_j

where c_0 is the current time when the system is ready to begin processing the batch, and i ranges over the jobs that have been processed at this point, including all preceding jobs as well as the current one. The mean flow time for the batch can be calculated as

F = (1/n) Σ_{j=1}^{n} F_j = c_0 − ā + (1/n) Σ_{j=1}^{n} Σ_{i=1}^{j} (p_i + s_i)

where c_0 and the mean arrival time ā are constant for a given batch. For a given batch, the current system time and the mean arrival time of all jobs remain the same no matter which sequence is used. Therefore, across different sequences, the most important variable factor affecting the result of MFT is (p_i + s_i). From their definitions, the same conclusion can be drawn for the performance measures MTD and WIP.
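The same (p_i + s_i) factor also drives the WIP formula of section 3.2.3; a quick numeric check of the underlying rearrangement (summing the cumulative (p_i + s_i) over all queue positions weights job i by (n + 1 − i), the number of flowtimes it contributes to):

```python
# Verify: sum over j of the cumulative (p_i + s_i) up to position j equals
# the (n + 1 - i)-weighted sum that appears in the WIP formula.
ps = [3.0, 5.0, 2.0, 7.0]                      # illustrative (p_i + s_i) values
n = len(ps)
lhs = sum(sum(ps[:j + 1]) for j in range(n))   # sum of cumulative sums
rhs = sum((n - i) * ps[i] for i in range(n))   # weight n + 1 - i for 1-based i
assert abs(lhs - rhs) < 1e-9                   # both equal 38.0 here
```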

Through similar analysis, the seven performance measures are grouped as shown in table 4-4 based on their primary affecting factors.

Table 4-4 Performance Measure Groups
Group   Performance Measures   Affecting Factor(s)
I       MFT, MTD, WIP          pj + sj
II      MACH, THRU

A group of selected scheduling rules is built into the neural network rule selection module as candidates for the initial selection of the scheduling strategy. In this study, fifteen (15) representative scheduling rules are selected based on their characteristics and general performance in industrial scheduling problems. These rules are: SPT, LPT, FIFO, LIFO, SST, LST, SPST, LPST, EDD, LDD, mSLACK, MSLACK, CR, SSLACK, and SLACK/RT (please refer to section 3.2.2 of chapter 3 for the definitions of these rules). The scheduling rules can be divided into two groups, simple rules and combinational rules, based on the number of variables describing job characteristics, such as processing times, arrival times, setup times, or due dates, that are involved in the rules. Simple rules (SPT, LPT, FIFO, LIFO, SST, LST, EDD, and LDD) involve only one of these variables, while combinational rules (SPST, LPST, mSLACK, MSLACK, CR, SSLACK, and SLACK/RT) involve two or more. Simple rules reveal more clearly the impact of a single variable on the performance of the system, while combinational rules indicate the joint impact of multiple variables on system performance.

4.3 Data Preparation for Pattern Recognition Neural Networks

Seven neural networks are developed as experts to select the best scheduling rules from the candidate rules, with each network corresponding to one of the seven performance measures used in this study. These expert networks are trained to select the most suitable rules for a given performance measure by recognizing the patterns contained in the job data set. The networks can be divided into different categories based on the performance measure groups listed in table 4-4. Each expert neural network is a three-layer, 16-input, 15-output feedforward network trained with the backpropagation algorithm.

The data for training and testing the expert neural networks are generated based on the category of the selected performance measure and the variable(s) that affect this performance. A linear regression model is used to describe the features of the job data and to map the data patterns into the neural network. The general linear regression model used here is y(t) = bt + a, and its least-squares coefficients are

b = [n Σ t·y(t) − Σ t Σ y(t)] / [n Σ t² − (Σ t)²]   and   a = ȳ − b·t̄

where t represents the sequence index number of the jobs (t = 1, 2, ..., n), n is the queue size, and y(t) is the factor that affects the output for the given performance measure; for example, for MFT, y(t) = (p_t + s_t) (see table 4-4). The input data set for the neural networks is formulated from a and b because their values fully describe the character of the linear model. Because the order in which the jobs are processed affects the setup times, and hence the values of a and b, it is important to consider the effect of different sequence combinations when calculating a and b. For example, consider the performance measure MFT and the sample job batch in table 4-3: when the jobs are processed in SPT order, the values of a and b are different from those when the jobs are processed in FIFO order (figures 4-2 and 4-3). Different processing orders imply different setup times, and therefore reveal different angles of the character of the jobs. The simple rules SPT, LPT, FIFO, LIFO, SST, LST, EDD, and LDD are integrated in formulating the input data set. Each pair of these rules indicates a single job characteristic (p, a, s, and d) in a distinct way. The values of a and b for the given performance measure category are calculated for each of these simple rules, and a total of 16 inputs is generated in the input vector for the neural network.
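The feature-extraction step above can be sketched as follows; the (p + s) factor used here corresponds to the MFT group, and the dict-based job representation is an assumption of this sketch.

```python
def regression_features(values):
    """Least-squares fit of y(t) = b*t + a over the ordered factor values
    (e.g. p_t + s_t in SPT order); (a, b) form two of the network inputs."""
    n = len(values)
    ts = range(1, n + 1)
    t_mean = sum(ts) / n
    y_mean = sum(values) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, values)) / \
        sum((t - t_mean) ** 2 for t in ts)
    a = y_mean - b * t_mean
    return a, b

def input_vector(jobs, rule_keys):
    """Form the network inputs: (a, b) of the chosen factor (here p + s, as
    for the MFT group) under each simple-rule ordering.  With the 8 simple
    rules this yields the 16 inputs described in the text."""
    vec = []
    for key in rule_keys:
        ordered = sorted(jobs, key=key)
        a, b = regression_features([j["p"] + j["s"] for j in ordered])
        vec.extend([a, b])
    return vec
```

For instance, ordering a batch by SPT and by LPT yields slopes of equal magnitude and opposite sign, reflecting the "single character in a diverse way" idea mentioned above.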

Figure 4-2 Linear representation of (p_t + s_t) in SPT order

Figure 4-3 Linear representation of (p_t + s_t) in FIFO (arrival time) order

To further reveal the patterns of the input data sets and to comply with the requirements of the backpropagation algorithm, the data generated above are scaled across the whole vector, "vertically," between 0 and 1 based on the actual values of each data set. The outputs of the expert neural network represent the performance of the 15 candidate scheduling rules, with each output unit corresponding to the performance of one of these rules for the given performance measure. The relative performances of these rules are ranked between 0 and 1, where 1 represents the best performance and 0 the worst for the given performance measure.
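The scaling of the inputs and the 0-1 ranking of the target outputs can both be written as simple min-max maps; the exact normalization used in the thesis is not spelled out, so a linear min-max form is assumed here.

```python
def scale01(vec):
    """Min-max scale a feature vector into [0, 1] before training."""
    lo, hi = min(vec), max(vec)
    if hi == lo:
        return [0.0] * len(vec)
    return [(v - lo) / (hi - lo) for v in vec]

def rank_targets(measure_values, minimize=True):
    """Map each rule's measure value to a 0-1 rank, with 1 the best
    performance (smallest value when minimizing, largest otherwise)."""
    lo, hi = min(measure_values), max(measure_values)
    if hi == lo:
        return [1.0] * len(measure_values)
    if minimize:
        return [(hi - v) / (hi - lo) for v in measure_values]
    return [(v - lo) / (hi - lo) for v in measure_values]
```

Flowtime- and tardiness-type measures would use `minimize=True`, while utilization and throughput would use `minimize=False`.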

4.4 Implementation of the Expert Neural Network Rule Selector

Before their implementation for rule selection, the seven expert neural networks are first trained off-line based on the backpropagation paradigm, and the configuration of the networks, i.e., the number of hidden units and the weights of the links between the neurons of adjacent layers, is determined through the training process so that each network acquires the ability to select the best rules for a given performance measure based on the patterns present in the given job data set. The number of hidden units may differ among networks designed for different performance measures. The RMS error is used as the training error measurement. In the neural network training process, NeuralWindows, a neural network Dynamic Link Library (DLL), and a neural board, both products of Ward Systems Group, Inc., are used to speed up the training, as the training of a backpropagation network is usually slow. Using the same training procedure for comparison, the training process with the neural board is two to five times faster than without the board on an IBM-compatible computer with an Intel 486DX-33MHz processor. Table 4-5 gives the configuration of the trained networks and the training specifications for the expert networks.

Table 4-5 Neural network configuration and training specifications

After the neural networks are trained and tested with satisfactory results, they are implemented for on-line rule selection. For a given performance measure, each set of input data is fed into the appropriate expert network, and the network then makes its best judgment in selecting the most suitable rules based on its training and the pattern in the input data. After the rule is selected, the sequence of the jobs in the queue is rearranged according to this rule, and the performance of the system is evaluated by calculating the performance specification based on the new process sequence for the specific performance measure.

Take the sample data in table 4-3 as an example: the simulation results of the 15 candidate rules for all 7 performance measures are shown in table 4-6, and the selection results of the expert neural networks are given in table 4-7.

Table 4-6 Simulation results of 15 scheduling rules for the example

Table 4-7 Results of 7 expert neural network rule selectors for the example
Performance Measure   Results   Rule Selected   Job Sequence
MaxFT                 117.000   FIFO            1 2 3 4 5 6 7 8 9 10
MFT                   65.800    SPST            5 10 7 8 6 9 2 3 4 1
MaxTD                 5.000     EDD             3 2 5 7 8 4 6 10 9 1
MTD                   0.500     EDD             3 2 5 7 8 4 6 10 9 1
WIP                   4.221     SSLACK          7 8 5 10 6 9 3 4 2 1
MACH                  0.829     SST             9 5 10 1 2 6 7 8 3 4
THRU                  0.122     SST             9 5 10 1 2 6 7 8 3 4

In table 4-6, the values of the performance measures for the best-performing rules are indicated in bold face. In this example, the neural networks have selected the best rules for all performance measures except WIP, for which the second-best-performing rule is selected.

4.5 Optimization with Genetic Algorithms

After the scheduling policy is selected by the neural network, the evaluation module may decide, based on the satisfaction level of the performance provided by the selected rule, that further optimization of the schedule is needed. This optimization is carried out by the genetic optimizer. The genetic algorithm procedure is as follows:

1. Randomly generate a population of 50 legal sequences as the initial generation, and substitute 4 of these sequences with the sequences of the 4 top-rated schedules selected by the expert neural network for the given performance measure. These 4 sequences are used as the "seed" schedules to provide high-quality "genes" and to ensure that the genetic optimizer has a high probability of generating a better new generation of solutions. In most situations, the top 4 schedules ranked by the expert neural networks will cover the best-performing rules.

2. Evaluate each sequence generated by the optimization process using a fitness function. Here the fitness function is the performance measure that the genetic algorithm is going to optimize, i.e., Fitness_i = Performance-Measure_i.

3. Select the best 25 (half of the population size) sequences from the population for reproduction, based on the values of the fitness function for each sequence.

4. Reproduce the selected sequences by duplicating each sequence.

5. Apply the crossover operator. The order crossover operator developed by Syswerda (1991) is utilized. First, pairs of sequences are selected randomly as parents; then the operator randomly selects several elements as crossover points. The selected elements of one parent are forced, in order, onto the positions where the other parent holds those same elements. Consider the following two sequences A and B as an example:

Position:    1  2  3  4  5  6  7  8  9  10
Sequence A:  1  2  3  4  5  6  7  8  9  10
Sequence B:  5  10 7  8  6  9  2  3  4  1

Randomly selected crossover points: 2, 4, 7, 9

The selected elements of sequence B (at positions 2, 4, 7, and 9) are 10, 8, 2, and 4, and these elements occupy positions 2, 4, 8, and 10 in sequence A. After crossover, offspring #1 is

Position:     1  2  3  4  5  6  7  8  9  10
Offspring #1: 1  10 3  8  5  6  7  2  9  4

Similarly, the selected elements of sequence A (at positions 2, 4, 7, and 9) are 2, 4, 7, and 9, and these elements occupy positions 3, 6, 7, and 9 in sequence B. Offspring #2 after the crossover is

Position:     1  2  3  4  5  6  7  8  9  10
Offspring #2: 5  10 2  8  6  4  7  3  9  1

6. Apply mutation if necessary. Mutation is applied with a very low probability (≤ 0.1).

7. Repeat steps 2-6 until no better result can be obtained.
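The crossover step above can be sketched directly; the function reproduces the worked example, using 0-based indices for the 1-based positions 2, 4, 7, and 9.

```python
def order_crossover(parent_a, parent_b, positions):
    """Position-based order crossover (after Syswerda, 1991): the elements
    parent_b holds at `positions` are re-inserted, in parent_b's order, into
    the slots where parent_a currently holds those same elements."""
    chosen = [parent_b[p] for p in positions]           # elements to impose
    slots = sorted(parent_a.index(e) for e in chosen)   # their slots in A
    child = parent_a[:]
    for slot, elem in zip(slots, chosen):
        child[slot] = elem
    return child

# The worked example above, with 0-based positions for points 2, 4, 7, 9:
A = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
B = [5, 10, 7, 8, 6, 9, 2, 3, 4, 1]
points = [1, 3, 6, 8]
offspring1 = order_crossover(A, B, points)   # [1, 10, 3, 8, 5, 6, 7, 2, 9, 4]
offspring2 = order_crossover(B, A, points)   # [5, 10, 2, 8, 6, 4, 7, 3, 9, 1]
```

Both offspring remain valid permutations of the ten jobs, which is what makes this operator suitable for sequence encodings.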

Consider again the same example given in table 4-3; the sequences of the final schedules after applying the genetic algorithm are shown in table 4-8. It can be seen from the sample results that the final sequences for some of the performance measures do not match any of the candidate scheduling rules (compare with tables 4-6 and 4-7).

Table 4-8 Results of the genetic optimizer for the example
Performance Measure   Results   Job Sequence
MaxFT                 117.000   1 2 3 4 5 6 7 8 9 10
MFT                   65.800    5 10 7 8 6 9 2 3 4 1
MaxTD                 0.000     10 2 3 4 5 7 8 6 9 1
MTD                   0.000     10 4 3 2 5 8 7 6 9 1
MACH                  0.829     9 5 10 1 2 6 7 8 3 4
THRU                  0.130     9 6 8 7 1 3 4 2 10 5

4.6 Analysis of the Results

Some dispatching rules have a reputation for providing fairly good results for the single-machine scheduling problem under certain performance measures. However, the schedule generated by a single rule is not always the best for every situation throughout the course of production, because of the dynamic changes of the system state. Therefore, backpropagation neural networks are utilized to dynamically select the appropriate rule that fits the current system specification and performance requirement, and the genetic algorithm is applied to further optimize the scheduling strategy. Table 4-9 shows a comparison of the mean values of the overall performance results among the fifteen candidate scheduling rules, the expert neural network rule selection modules, and the genetic optimization module for 100 test samples and all seven performance measures.

Table 4-9 Comparison of the overall performance among scheduling rules, neural networks, and genetic algorithm for 100 test samples

Scheduling   Performance Measures
Strategy     MaxFT     MFT       MaxTD     MTD       WIP     MACH    THRU
SPT
LIFO
SST
LST
SPST
LPST
EDD
LDD
SLACK
MSLACK
SSLACK       722.400   255.364   574.960   166.568   4.731   0.842   0.122
SLACK/RT     713.440   263.596   580.720   170.444   5.413   0.803   0.117
NN           671.206   253.108   544.520   160.018   4.619   0.902   0.134
NN + GA      670.800   252.948   544.320   159.372   4.588   0.903   0.134

The results from the neural networks are generally better than those of any candidate rule acting alone. A well trained neural network will always examine the system state and the candidate rules at a decision point, and select the rule that is most suitable for the current system state, rather than apply the same rule over and over. This helps ensure that the best available scheduling strategy is applied in any given situation. Other benefits of using neural networks include their learning ability: if the system configuration is changed or new rules become available, the neural networks can be re-trained off-line to learn the new situation and then implemented with the new knowledge they have learned. Although the training process of the backpropagation neural networks is relatively slow and requires extra effort to make them converge, once trained, they generate robust results, and their execution is fast enough for real-time implementation.

Even though a well trained neural network can, most of the time, select the best rule from the available ones for a given situation (in many situations this is good enough, while in others it is not), the limitation of the available rules can still prevent it from generating an overall best schedule: there are situations in which the best schedule is not produced by any of the available rules but is instead a "not-existing-rule" schedule. Genetic algorithms are applied to find such a schedule in order to further improve the performance, since a genetic algorithm can search the solution space more thoroughly; in some cases the improvement is quite significant (tables 4-8, 4-9). When the genetic algorithms are integrated into this scheduling system, the expert neural networks work as a preliminary rule selector that provides the genetic algorithms with the sequences from the best scheduling rules it selected based on its best judgment. These sequences are used as the "seed" sequences for the genetic algorithm and serve dual purposes: (1) they provide some high quality "genes" in the initial population to guarantee that the future generations will produce at least a sequence of the same quality, if not better; (2) they reduce the execution time and the number of iterations needed to reach the final solution. In this research, with the top 4 rules selected by the neural network passed to the genetic algorithm, the average number of iterations is cut by more than 62%, compared with the same process without the high quality "seeds".

From this research and the simulation results shown in the above tables, it can be concluded that the integrated pattern-recognition scheduling method, which combines backpropagation neural networks with genetic algorithms as the scheduling strategy optimizer, does provide a practical way to solve the single-machine scheduling problem and generates better results than any scheduling rule working alone.

Chapter 5 Case Study 2: Multiple-machine Scheduling Problem

5.1 Description of the Problem

The single-machine scheduling problem was studied in chapter 4. While the single-machine scheduling problem is important in revealing the fundamental concepts and basic rules of manufacturing scheduling, multiple-machine scheduling problems are more realistic and complex, and are more often encountered in the scheduling of manufacturing systems. Therefore, a multiple-machine scheduling problem is studied in this research. Consider the system as a multiple-machine flexible manufacturing cell that includes ten (10) workstations, one material handling robot, and an input buffer and an output buffer for the cell; in addition, each machine has its own input and output buffer (figure 5-1).

[Figure: each machine with its input and output buffers, the cell input buffer, the cell output buffer, and the pattern recognition scheduler]

Figure 5-1 A 10-machine FMS system

In this FMS system, the material handling robot is responsible for the material handling tasks, which include transporting unprocessed parts from the system input buffer to a specific machine, moving partially finished parts between machines within the manufacturing cell, and shipping completed parts out of the system (to the system output buffer). Each of the ten machines has an input buffer with enough capacity to hold all the parts awaiting processing on that machine, and an output buffer with enough capacity to allow the machine to unload a job whenever it has finished processing and be ready for the next job. The input buffer of the cell has the capacity to hold a whole batch of jobs, and the cell output buffer can take the jobs completed by the cell without causing the FMS cell to wait in order to unload them.

The manufacturing cell processes jobs in a batch mode. Each batch has multiple jobs to be processed, and each job has multiple operations. Each operation is performed on a different machine, and each operation has its own required processing time. The processing of an operation depends on the completion of related operations and the availability of the machine that will perform the operation. The robot selects a job from the available candidates and transports it to the specific machine following a certain dispatching rule according to the requirement of the performance measure. The available candidate jobs are those waiting in the output buffers of the machines and those entering the system, and the destinations of the robot are the input buffers of the machines. The machines then process the jobs available in their input buffers following the same dispatching rule selected for the system based on the performance measure requirement.
The dispatching rule selected can be different at different points in time, according to the current state of the system. The objective of this research is to apply the framework of the pattern recognition scheduling system described in chapter 3 to solve the multiple-machine scheduling problem. The system will decide, at a certain point in time, a scheduling strategy that is most suitable for the current system state so that the jobs will be arranged and processed among the machines in a way that optimizes the required performance measure throughout the course of the batch job process.

5.2 Data Acquisition and Analysis

5.2.1 Job Data

The jobs are processed in the FMS system in a batch mode. A batch contains 15 - 25 jobs, and each job has multiple operations (1 - 10) to be processed by different machines. Some jobs may only need to be processed on some of the 10 machines, while others need to be processed on all the machines in the system. The definition of a certain job follows one of the ten (10) different process plans available for this problem (table 5-1), and each job in a batch is assigned a process plan. Each of the 10 process plans indicates the number of operations required for the job to complete, the machine required to process each operation, and the process time for each operation. The setup time of a job can differ across machines and operations. To simplify the problem, the setup time is combined into the processing time for each operation.

Table 5-1 Process Plans

Plan #  # of Operations  Operations (Operation #: Machine #, Process Time)
1       7                1: 4, 4;  2: 3, 6;  3: 1, 8;  4: 2, 5;  5: 10, 4;  6: 9, 5;  7: 8, 8
2       9                1: 3, 6;  2: 1, 8;  3: 9, 5;  4: 4, 6;  5: 8, 6;  6: 7, 4;  7: 5, 7;  8: 6, 3;  9: 10, 4
3       10               1: 1, 4;  2: 2, 5;  3: 4, 6;  4: 3, 6;  5: 10, 4;  6: 5, 7;  7: 6, 4;  8: 9, 5;  9: 8, 6;  10: 7, 4
4       8                1: 3, 6;  2: 2, 5;  3: 4, 6;  4: 5, 7;  5: 9, 5;  6: 6, 4;  7: 7, 4;  8: 8, 6
5       10               1: 2, 5;  2: 10, 4;  3: 4, 6;  4: 5, 7;  5: 6, 3;  6: 7, 4;  7: 3, 6;  8: 1, 4;  9: 8, 6;  10: 9, 5
6       …
7       …
8       …
9       …
10      7                1: 4, 6;  2: 6, 4;  3: 5, 7;  4: 9, 5;  5: 10, 4;  6: 1, 4;  7: 2, 5

A simulation program is developed in the C language to simulate the multiple-machine system and generate the information pertaining to the system. The simulator randomly assigns a process plan from the available plans to each job within the batch. The job database is built with the information that describes the characteristics of the jobs in the batch to be processed by the FMS cell. The information includes the number of jobs in the batch (batch size), job type, the current system time (ready time), the due date, the number of operations for a job to complete, the machine required for processing each operation, and the processing time required for each operation to complete. Table 5-2 gives sample job data for a batch of 15 jobs.

[Table 5-2 Sample job data for a batch of 15 jobs: for each job, the job type (process plan), number of operations, ready time, due date, and the machine and processing time of each operation]
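As a concrete illustration, the job records described above might be held in C structures such as the following. This is a sketch only; the field names are our assumption, and the actual program listings are in Appendix A.

```c
#define MAX_OPS 10   /* a job has 1 - 10 operations */

struct Operation {
    int machine;     /* machine (1-10) performing this operation */
    int proc_time;   /* processing time, with setup time folded in */
};

struct Job {
    int job_type;    /* assigned process plan, 1-10 (table 5-1) */
    int num_ops;     /* number of operations for this job */
    int ready_time;  /* system time at which the job becomes available */
    int due_date;
    struct Operation ops[MAX_OPS];  /* executed in order */
};

/* Total work remaining for a job, counting from operation `next`
 * (0-based); a quantity used by rules such as LWR (Least Work
 * Remaining). */
int work_remaining(const struct Job *j, int next)
{
    int sum = 0, k;
    for (k = next; k < j->num_ops; k++)
        sum += j->ops[k].proc_time;
    return sum;
}
```

For a job following process plan 1 of table 5-1, for example, work_remaining at the start of processing is simply the sum of its seven operation times.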

5.2.2 Performance Measures

As in the single-machine scheduling problem, performance measures are needed to evaluate the performance of the multiple-machine scheduling system. In this problem, two performance measures are used to evaluate the performance of the selected scheduling strategy. These two performance measures are:

MFT (Mean Flowtime)
MTD (Mean Tardiness)

(Refer to section 3.2.2 of chapter 3 for the detailed definitions of these performance measures.)

5.2.3 Scheduling Rules

In this study, a group of nine (9) scheduling rules is selected as candidate rules through a preliminary rule selection process based on the rules' characteristics and general performance in industrial scheduling problems. These scheduling rules are built into the neural network rule selection module as candidates for selection of the scheduling strategy. These rules are:

SPT (Shortest Processing Time)
LPT (Longest Processing Time)
FIFO (First In First Out)
EDD (Earliest Due Date)
LWR (Least Work Remaining)
DSLACK (Dynamic Slack Time)
SLACK/RT (Job Slack Ratio)
CR (Critical Ratio)
LNOR (Least Number of Operations Remaining)

(Refer to section 3.2.2 of chapter 3 for the detailed definitions of these rules.)

5.3 Data Preprocess for Neural Networks

Two similarly configured neural networks are developed as the scheduling rule selectors to select the most suitable scheduling strategy (rule) from the available candidate rules. One of the neural networks corresponds to the performance measure MFT and the other to MTD. These networks are trained so that they are able to choose the most suitable rule from the candidates to optimize the given performance measure throughout the course of the process. Both neural networks are 3-layer, 12-input, 9-output feedforward networks trained with the backpropagation paradigm (refer to section 3.3 for the details of this paradigm).

To obtain a well-built and well-trained neural network for each performance measure with the given candidate scheduling rules, the creation of an effective training data set is critical. The input vectors of the training data should represent the characteristics and nature of the batch to be processed, and reveal the characteristics that differentiate one batch from another. Therefore, the input data must include the attributes that reveal the unique characteristics of each batch. Since the current state of the system (e.g. the availability of machines) plays another critical role in the selection of the scheduling strategy for a given performance measure, the neural network input data should also carry information concerning the system state. The input data set for training the neural network is composed of the following entities:

1. Number of jobs in the system (system load).
2. Mean[(due_date - present_time - total_working_time)/total_working_time] for all jobs.
3. StdDev[(due_date - present_time - total_working_time)/total_working_time] for all jobs.
4. Mean[machine_load/Max(machine_load)] for all machines (by job).
5. StdDev[machine_load/Max(machine_load)] for all machines (by job).
6. Mean(slack_time) for jobs in the system.
7. Min(process_time) for jobs in the system.
8. Mean(process_time) for jobs in the system.
9. Number of remaining operations in the system.
10. Mean(due_date).
11. Mean[machine_load/Max(machine_load)] for all machines (by machine).
12. StdDev[machine_load/Max(machine_load)] for all machines (by machine).

Because a backpropagation neural network requires that the values of the data sets be in the range 0 - 1, the input data for the training process is scaled between 0 and 1 "vertically" - by columns among the data set in the data space. Different scaling techniques are applied in scaling the data sets. For data whose values are distributed close to uniform, linear scaling is used, and for data whose values are distributed close to a normal distribution, the 3-σ rule is applied to obtain the minimum and maximum values for the scaling.

The output vector (targets) of the neural network training data set is composed of the performance of the scheduling rules selected for the given performance measure (section 5.2.2 and section 5.2.3). There are 9 outputs in total, each corresponding to the performance of one of the scheduling rules. The output data set is scaled so that the value of each output is between 0 and 1, according to the requirements of the backpropagation paradigm. In this research, in order to speed up the training process and increase the generalization of the training results, the output vector is scaled between 0.1 and 0.9.
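The two column-scaling schemes and the 0.1 - 0.9 output scaling might be sketched as below. The thesis does not print the formulas, so the min-max form and the clipping in the 3-σ case are our assumptions.

```c
/* Linear (min-max) scaling into [0, 1], for near-uniform columns. */
double scale_linear(double x, double min, double max)
{
    return (x - min) / (max - min);
}

/* 3-sigma scaling for near-normal columns: the scaling range is
 * mean +/- 3*stddev, and values falling outside it are clipped. */
double scale_3sigma(double x, double mean, double stddev)
{
    double lo = mean - 3.0 * stddev;
    double hi = mean + 3.0 * stddev;
    double y = (x - lo) / (hi - lo);
    return y < 0.0 ? 0.0 : (y > 1.0 ? 1.0 : y);
}

/* Map a [0, 1] target into [0.1, 0.9] for the output vector. */
double scale_output(double y)
{
    return 0.1 + 0.8 * y;
}
```

For a normal column, almost all values fall within mean ± 3σ, so the 3-σ rule uses nearly the full [0, 1] range without letting a rare outlier compress the rest of the column.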

5.4 Implementation of the Neural Network Scheduler

The implementation of the neural network scheduler requires the training of the networks and the actual use of the trained networks to schedule the jobs. The neural networks are trained using the backpropagation paradigm, and the configuration of the networks (the number of hidden units and the weights of the links between the neurons of adjacent layers) is determined through the training process so that the network will acquire the knowledge and the generalization ability necessary to select the best rules for a given performance measure based on the patterns provided in the given job data set. The number of hidden units may differ between networks designed for different performance measures.

The RMS error is used as the error measurement in the training process. The configurations of the final trained neural networks are given in table 5-3.

Table 5-3 Neural network configurations

Performance Measure   # Inputs   # Hidden   # Outputs   Training RMS   Epochs
MFT                   12         10         9           0.079          62000
MTD                   12         10         9           0.083          56000

Neural Windows - a Visual Basic neural network Dynamic Link Library (DLL) - and a neural board, products of Ward Systems Group, Inc., are used in order to make the training faster, as the training of a backpropagation network is usually slow. After the neural networks are trained and properly tested with satisfactory results, they are ready to be implemented for the rule selection tasks. Table 5-4 shows the simulation results of the 9 scheduling rules and the trained neural networks' selection of the rules for MFT and MTD for the sample job batch indicated in table 5-2. In this example, the best-performing rule for each given performance measure is selected. Sometimes the network may not select the best rule but the second or third best one.

Table 5-4 Simulation results of 9 rules and neural network selection

      SPT    LPT    FIFO   EDD   LWR    DSLACK  SLACK/RT  CR    LNOR   NN Selection
MFT   9.40   10.00  9.73   9.87  9.93   9.27    9.33      9.13  10.00  CR
MTD   24.40  24.33  18.40  7.13  20.33  8.93    10.00     8.93  21.40  EDD

5.5 Analysis of the Results

In this research, neural networks were used to make the decision in selecting the suitable scheduling strategy for the multiple-machine scheduling problem. Efforts were focused on reducing the mean tardiness and the mean flow time individually over the course of batch production. Table 5-5 lists the simulation results for 1000 sample batches when the process is scheduled with a single scheduling rule as well as when the scheduling decision is made dynamically using the neural networks, for both of the performance measures MFT and MTD. The table shows the number of times, and the percentage, that a scheduling method provided the best scheduling strategy throughout the course of the process.

Table 5-5 Performance of scheduling rules and neural networks

      SPT   LPT   FIFO  EDD    LWR   DSLACK  SLACK/RT  CR     LNOR  NN
MTD   4     0     6     493    5     172     176       209    6     651
      0.4%  0.0%  0.6%  49.3%  0.5%  17.2%   17.6%     20.9%  0.6%  65.1%

From the table, it can be seen that FIFO provides the best schedule (winning rate 29.4%) among all rules for the performance measure MFT, and EDD is the best rule (winning rate 49.3%) for MTD, when the one-rule method is used in making the scheduling decisions. However, compared with the results generated by the neural networks (winning rates 63.0% for MFT and 65.1% for MTD, respectively), it can be seen that the dynamic method using neural networks does provide better predictions and makes better decisions on the scheduling of the jobs than the single-rule method for both MFT and MTD. (Figures 5-2 and 5-3 show the same results in the form of bar charts.)

[Bar chart of winning rates by scheduling method for MFT]
Figure 5-2 Performance of rules for MFT

[Bar chart of winning rates by scheduling method for MTD]
Figure 5-3 Performance of rules for MTD

The pattern recognition neural network method appears superior to the scheduling rules in this case because the neural networks dynamically decide which scheduling strategy to use based on the current system state as well as the characteristics of the jobs currently in the system. The method adjusts the decisions from time to time in order to optimize the performance measure, rather than relying on one "best rule" all the time. A rule can perform better than any other rule under a certain condition, but the rule will not necessarily perform at the same level in other situations. Therefore, dynamically adjusting the scheduling strategy according to the changing system status is critical to obtaining overall satisfactory scheduling results for a given performance measure throughout the whole production period, and that is exactly how the neural networks worked in this multiple-machine scheduling case.

Although the neural networks performed better than any single scheduling rule working alone for the given performance measures, the performance was not as good as expected. The reason could be that the neural network did not learn all it needed to learn during the training process in order to generate better results, or that the input representation of the jobs did not reveal all the characteristics needed to differentiate one job from another. To improve this, more study on neural network input representation may be needed in future research.

Chapter 6 Conclusions

6.1 Conclusions

In the research presented in this thesis, a pattern-recognition approach for the scheduling of dynamic manufacturing systems was studied, and a three-module artificial neural network based intelligent pattern-recognition scheduling system was developed. This system integrated computer simulation, artificial neural networks, dispatching rules, and genetic algorithms. The neural networks were used to select the most suitable scheduling strategy for a given performance measure based on the patterns revealed in the data that represent the current state of the system (machines, jobs, performance requirements, etc.). The performance of this system was studied through its applications, with variation, in the scheduling of a single-machine manufacturing system with process order dependent setup times and a multiple-machine scheduling problem.

The simulation results from both the single-machine and the multiple-machine cases showed that the artificial neural network based integrated pattern-recognition scheduling method performed generally better than any dispatching rule acting alone for the given performance measures. In the single-machine case, since some rules performed fairly well for certain given performance measures, genetic algorithms were used together with the neural networks in order to further improve the performance of the scheduling system. The genetic algorithms further optimized the scheduling strategies selected by the neural network rule selector.

One of the advantages of the neural network based pattern-recognition method is that the neural networks can decide which scheduling strategy to use dynamically, by examining at each decision point the current state of the manufacturing system as well as the patterns revealed in the jobs to be processed by the system.
The system adjusts the scheduling decisions from time to time by selecting the rule that is most suitable for the given performance measure in a particular situation, rather than applying the same "best rule" throughout the whole course of production, since a rule cannot be the "best" all the time. This ensures that the scheduling strategy selected at a decision point is always the most suitable one for the given situation (on the condition that the neural networks were well constructed and well trained, so that they could recognize the correct patterns revealed in the provided data that reflect the current state of the system).

Another advantage of the neural network based method is its ability to learn. Once a neural network is properly configured and trained, it retains the knowledge exposed to it during the training process and builds up an ability to generalize the knowledge it has learned. When a new job with a similar pattern appears, the neural network can recognize it without difficulty. In this way the neural networks can provide robust performance and be used for dynamic scheduling situations. If the system configuration is changed, or new rules become available, or jobs with new characteristics that are fundamentally different from those learned during the training process are added, or a neural network decision mistake needs to be corrected, the neural networks can be re-trained off-line with the new data to learn the new situations without interrupting the production process. The re-trained networks can then be implemented to deal with the new situation.

It can be concluded from the results of this research that the neural network based pattern-recognition method does have the potential to provide a practical way to solve dynamic manufacturing scheduling problems.
Furthermore, a scheduling system with effective integration of artificial neural networks, genetic algorithms, computer simulation techniques, and traditional scheduling approaches such as dispatching rules is able to create an environment in which each technique complements the others, providing the required level of "intelligence" to respond to the dynamic changes of the manufacturing environment in a timely and effective manner.

6.2 Issues and Future Works

Although the simulation results obtained through the use of pattern recognition neural networks for both the single-machine and multiple-machine scheduling problems were better than the performance of individual scheduling rules for a given performance measure, there are several issues. One issue is that in the multiple-machine scheduling problem, the training process is slow, the training error is relatively high, and the performance of the neural network rule selector is not as good as expected. This can be due to the training data representation, the network selected, or the software (Neural Windows) used in the training process. In addition, when new data are added, the network has to be retrained using the new data as well as the old data sets. To improve the performance of the system, future work may include:

More research in the formulation of artificial neural network training data sets (input/output) to make them more effectively represent the unique characteristics of the jobs and the current status of the manufacturing system, especially in the case of multiple-machine scheduling.

More research in finding a neural network that is fast to train and can learn new data without retraining on old data that were already used in previous training.

Developing or finding a better program for the training process.

The possible integration of genetic algorithms in solving the multiple-machine scheduling problem.

References

Afentakis, P., "Maximum Throughput in Flexible Manufacturing Systems," Proceedings of the Second ORSNTIMS Conference on Flexible Manufacturing Systems, K. Stecke and R. Suri (Eds.), Elsevier, 1986, pp. 509-520. Alptekin ,S. and L. Rabelo, "Expert System Applications in CIM," presented at the ORSNTIMS National Conference, Denver, Colorado, October 25, 1988. Arizono, I., A. Yamamoto, and H. Ohta, "Scheduling for Minimizing Total Actual Flow Time by Neural Networks," International Journal of Production Research, Vol. 30, No. 3, 1992, pp. 503-511. Badami, V., and C. Parks, "A Classifier Based Approach to Flow Shop Scheduling," Computers and Industrial Engineering, Vol. 2 1, 199 1, pp. 329-333. Baker, K. R, Introduction to Sequencing and Scheduling, John Wiley & Son, New York, 1974. Barto, A. G., S. Bradtke, and S. Singh, "Learning to Act Using Real-Time Dynamic Programming," CMPSCI Technical Report, 93-02, January 1993. Blackstone, J., D. Phillips, and G. Hogg, "A state-of-the-art survey of dispatching rules for manufacturing job shop operations," International Journal of Production Research, Vol. 20, No. 1, 1982, pp. 27-45. Blazewicz, J., G. Finke, R. Haupt, and G. Schmidt, "New trends in machine scheduling," European Journal of Operational Research, V. 37, 1988, pp. 303-3 17. Brandimarte, P., W. Ukvich, and A. Villa, "Factory Level Aggregate Scheduling: A Basis for Hierachical Approach," Proceedings of the 3rd International Conference on Computer Integrated Manufacturing, IEEE Computer Society Press, May 1992, pp. 393-402. Bryne, W. "Alternating Minimization and Boltzmann Machine Learning," IEEE Transactions on Neural Networks, Vol. 3, No.4, July 1992. Burke, L., "Assessing a Neural Net Validation Procedures," PC AI, MarchIApril 1993, pp.20-24. Buxey, G., "Production scheduling: rpactice and theory," European Journal of Operational Research, V. 39, 1989, pp. 17-31. Caramanis, M. 
C., "Development of a Science Base for Planning and Scheduling Manufacturing Systems," Proceedings of the 1992 NSF Design and Manufacturing Systems Conference, 1992, pp. 833-836. Came, A. and A. Petsopoulos, "Operation Sequencing in a FMS," Robotica, Vol. 3, 1985, pp. 259-264. Chandra, J. and J. Talavage, "Intelligent Dispatching for Flexible Manufacturing," International Journal of Production Research, Vol. 29, No. 11, 1991, pp. 2259-2278. Chang, E., Lippman, R, and Tong, D., "Using Genetic Algorithms to Select and Create Features for Pattern Classification," Proceedings of the International Joint Conference on Neural Networks, 3, 1990, pp. 747-752. 86 Chang, Y. and Sullivan, R, "Real-Time Scheduling of FMS," presented at TIMSIORSA San Francisco Meeting, May 1984. Cheng, T. C. E., and M. C. Gupta, "Survey of scheduling research involving the due-date determination decisions," European Journal of Operational Research, V. 38, 1989, pp. 156-166. Chryssolouris, G., M. Lee, J. Pierce, and M. Domorese, "Use of Neural Networks for the Design of Manufactwing Systems," Manufacturing Review, Vol. 3, No.3, 1990, pp. 187- 194. Chryssolouris, G., M. Lee, and M. Domroese, "The Use of Neural Networks in Determining Operational Policies for Manufacturing Systems," Journal of Manufacturing Systems, Vol 10, No. 2, pp. 166-175. Conway, R., "Priority Dispatching and Work-in-process Inventory in a Job Shop," Journal of Industrial Engineering, Vol. 16, 1965, p. 228. Davis, L., "Job Shop Scheduling with Genetic Algorithms," Proceedings on an International Conference on Genetic Algorithms and Their Applications, IEEE, 1987, pp. 23 1-236. Davis, L., Handbook of Genetic Algorithms, Van Nostrand Reinhold, 1991. Davis W. and A. Jones, "Issues in real-time simulation for flexible manufacturing systems," Proceedings of the European Simulation Multiconference, Rome, Italy, June 7-9, 1989. Doctor, S. R., T. M. Vavalier, and P. J. 
Egbelu, "Scheduling for Machining and Assembly in a Job-shop Environment," International Journal of Production Research, Vol. 3 1, No. 6, 1993, pp. 1275-1297. Doulgeri, Z., G. D'alessandro, and N. Magaletti, "A Hierarchical Knowledge-based Scheduling and Control for FMS's," International Journal of Computer Integrated Manufacturing, Vo1.6, No. 3, 1993, pp. 191-200. Eaton, H. A. C. and T. L. Olivier, "Learning Coefficient Dependence on Training Set Size," Neural Networks, Vol. 5, 1992, pp. 283-288. Eilon, S. and I. Choudury, "Experiments with SI rule in Job Shop Scheduling," Simulation, Vol. 24, 1975, p. 45. Elvers, D., "The Sensitivity of the Relative Effectiveness of Job Shop Dispatching Rules With Various Arrival Distributions," A.I.I.E. Transactions, Vol. 6, 1974, p. 41. Emmons, H., "One machine sequencing to minimize mean flowtime with minimum number tardy. " Naval Research Logistics Quarterly, V. 22, 1975, pp. 585-592. Foo, Y. and Takefbji, Y., "Integer Linear Programming Neural Networks for Job-Shop Scheduling," Proceedings of the IEEE International Conference on Neural Networks, published by IEEE TAB, 1988, pp. I134 1-11348. Foo, Y. and Y. Takefbji, "Stochastic Neural Networks for Solving Job-Shop Scheduling: Part 2. Architecture and Simulations," Proceedings of theIEEE International Conference on Neural Networks, published by IEEE TAB, 1988, pp. II283-II290. French, S., Sequencing and Scheduling: An Introduction to the mathematics of the Job-shop, Ellis Honvood Limited, England, 1982. 87 Freses, S. D., "A simple simulation for scheduling in a flexible manufacturing system." Proceedings of the 1987 Winter Simulation Conference, 1987, pp. 654-658. Fu, L. and P. Liu, "Hierarchical Dynamic Scheduling for a Flexible Manufacturing System," Proceedings of the 3rd International Conference on Computer Integrated Manufacturing, IEEE Computer Society Press, May 1992, pp. 393-402. Grabot, B. and L. 
Geneste, "Dispatching Rules in Scheduling: a Fuzzy Approach," International Journal of Production Research, V. 32, No. 4, 1994, pp 903-915. Hershauer, J. and J. Ebert, "Search and Simulation Selection of a Job Shop Scheduling Rule," Management Science, Vol. 2 1, 1974, p. 883. Hershauer, J. C. and R J. Ebert, "Search and Simulation Selection of a Job-shop Sequencing Rule," Management Science, Vol. 21, No. 7, March 1975, pp. 833-843. Hodson, A., A P. Muhlemann, and D. H. R. Price,, "A Microcomputer Based Solution to a Practical Scheduling Problem," Journal of the Operational Research Society, V. 36, 1985, pp. 903-913. Hoitomt, D., P. B. Luh, S. Bailey, and S. LoStocco, "A Practical System for Scheduling Manufacturing Job Shops," Proceedings of the 3rd International Conference on Computer Integrated Manufacturing, IEEE Computer Society Press, May 1992, pp. 393-402. Holland, J. Adaptation in Natural and ArtzJicial Systems, University of Michigan Press, 1975. Holter, T., X. Yao, L. Rabelo, A. Jones and Y. Yih, "Integration of Neural Networks and Genetic Algorithms for an Intelligent Manufacturing Controller", Computers & Industrial Engineering, Vol. 29, No. 1-4, pp. 211-215, 1995 Hopfield J. and D. Tank, "Neural computation of decisions in optimization problems", Biological Cybernetics, Vol. 52, 1985, pp. 14 1- 152. Hutchinson, J., K Leong, D. Snyder, and P. Ward, "Scheduling for Random Job Shop Flexible Manufacturing Systems," Proceedings of the Third ORSAITIMS Conference on Flexible Manufacturing Systems, edited by K. Stecke and R. Suri, Elsevier, 1989, pp. 161-166. Jain, P. K and C. T. Mosier, "Artificial Intelligence in Flexible Manufacturing Systems," International Journal of Computer Integrated Manufacturing, Vo1.5, No. 6, 1992, pp. 378-384. Johnston, M. D. and H. M. Ado* "Scheduling with Neural Networks -- the Case of the Hubble Space Telescope," Computers Operations Research, Vol. 19, No. 314, 1992, pp. 209-240. Jones, A. and A. 
Saleh, "A Multi-level/Multi-layer Architecture for Intelligent Shop Floor Control," International Journal of Computer Integrated Manufacturing, Special Issue on Intelligent Control, Vol. 3, No. 1, 1990, pp. 60-70. Jones, A. and C. R. McLean, "A Proposed Hierarchical Control Model for Automated Manufacturing Systems," Journal of Manufacturing Systems, Vol. 1, 1986, pp. 15-25. Jones, A. and L. Rabelo, "Real-Time Decision Making Using Neural Networks, Simulation, and Genetic Algorithm," Proceedings of the International FAIM'92 Conference, Fairfax, VA, 1992. Karr, C., "An Introduction to Genetic Algorithms," Proceedings of the Second Workshop on Neural Networks: Academic/Industrial/NASA/Defense, 1991, pp. 667-675. Kennedy, M. P. and L. O. Chua, "Neural Networks for Nonlinear Programming," IEEE Transactions on Circuits and Systems, Vol. 35, No. 5, May 1988. Keyvan, S., A. Durg, and L. Rabelo, "Evaluation of the Performance of Various Artificial Neural Networks to the Signal Fault Diagnosis in Nuclear Reactor Systems," Proceedings of the 1993 International Conference on Neural Networks, San Francisco, California, 1993. Kiran, A. and S. Alptekin, "Scheduling Jobs in Flexible Manufacturing Systems," NBS Special Publication No. 724, 1986, pp. 393-400. Kiran, A. and S. Alptekin, "A Tardiness Heuristic for Scheduling Flexible Manufacturing Systems," 15th Conference on Production Research and Technology: Advances in Manufacturing Systems Integration and Processes, University of California at Berkeley, Berkeley, California, January 9-13, 1989, pp. 559-564. Kosko, B., Neural Networks and Fuzzy Systems, Prentice-Hall, Inc., 1992. Krucky, J., "Fuzzy Family Setup Assignment and Machine Balancing," Hewlett-Packard Journal, June 1994, pp. 51-64. Kusiak, A., "Scheduling Automated Manufacturing Systems: A Knowledge-Based Approach," Proceedings of the Third ORSA/TIMS Conference on Flexible Manufacturing Systems: Operations Research Models and Applications, Cambridge, Massachusetts, Elsevier Science Publishers B. V., 1989, pp. 377-382. Lacher, R.
C., S. I. Hruska, and D. C. Kuncicky, "Back-propagation Learning in Expert Networks," IEEE Transactions on Neural Networks, Vol. 3, No. 1, January 1992, pp. 62-72. Law, A., "Statistical Analysis of Simulation Output Data," Operations Research, Vol. 56, No. 6, 1983, pp. 983-1029. Leong, G. K. and M. D. Oliff, "A Sequencing Heuristic for Dependent Setups in a Batch Process Industry," OMEGA, Vol. 18, 1990, pp. 283-297. Levin, E., R. Gewirtzman, and D. F. Inbar, "Neural Network Architecture for Adaptive System Modeling and Control," Neural Networks, Vol. 4, April 1994, pp. 185-191. Lo, Z. and B. Bavarian, "Scheduling with Neural Networks for Flexible Manufacturing Systems," Proceedings of the 1991 IEEE International Conference on Robotics and Automation, Sacramento, California, 1991, pp. 818-823. Ma, C. Y. and M. A. Shanblatt, "Linear and Quadratic Programming: Neural Network Analysis," IEEE Transactions on Neural Networks, Vol. 3, No. 4, July 1992. Maccarthy, B. L. and J. Liu, "Addressing the Gap in Scheduling Research: A Review of Optimization and Heuristic Methods in Production Scheduling," International Journal of Production Research, Vol. 31, No. 1, 1993, pp. 59-79. Matsuura, H., H. Tsubone, and M. Kanezashi, "Sequencing, Dispatching, and Switching in a Dynamic Manufacturing Environment," International Journal of Production Research, Vol. 31, No. 7, 1993, pp. 1671-1688. Melnyk, S. A., S. K. Vickery, and P. I. Carter, "Scheduling, Sequencing, and Dispatching: Alternative Perspectives," Production and Inventory Management, Vol. 27, 1986, pp. 58-68. Messa, K., "Fitting Multivariate Functions to Data Using Genetic Algorithms," Proceedings of the Second Workshop on Neural Networks: Academic/Industrial/NASA/Defense, 1991, pp. 677-686. Montazeri, M. and L. N.
Van Wassenhove, "Analysis of Scheduling Rules for FMS," International Journal of Production Research, Vol. 28, No. 4, 1990, pp. 785-802. Moser, M. and S. Engell, "Comprehensive Evaluation of Priority Rules for On-line Scheduling: The Single Machine Case," Proceedings of the 3rd International Conference on Computer Integrated Manufacturing, IEEE Computer Society Press, May 1992, pp. 393-402. Nelson, R. T., R. F. Sarin, and R. L. Daniels, "Scheduling with Multiple Performance Measures: the One Machine Case," Management Science, Vol. 32, 1986, pp. 464-479. Ovacik, I. M. and R. Uzsoy, "Exploiting Real-time Shop Floor Status Information to Schedule Complex Job Shops," 2nd IIE Industrial Engineering Research Conference Proceedings, 1992, pp. 868-872. Panwalkar, S. S. and Wafik Iskander, "A Survey of Scheduling Rules," Operations Research, Vol. 25, No. 1, 1977, pp. 45-61. Parunak, H., "Characterizing the Manufacturing Scheduling Problem," Journal of Manufacturing Systems, Vol. 10, No. 3, pp. 241-259. Pierreval, H., "Training a Neural Network by Simulation for Dispatching Problems," Proceedings of the 3rd International Conference on Computer Integrated Manufacturing, IEEE Computer Society Press, May 1992, pp. 332-336. Plutowski, M. and H. White, "Selecting Concise Training Sets from Clean Data," IEEE Transactions on Neural Networks, Vol. 4, No. 2, March 1993, pp. 305-318. Quinlan, J. R., "Generating Production Rules from Decision Trees," Proceedings of the 10th International Joint Conference on Artificial Intelligence, Milan, 1987. Rabelo, L. and S. Alptekin, "Synergy of Neural Networks and Expert Systems for FMS Scheduling," Proceedings of the Third ORSA/TIMS Conference on Flexible Manufacturing Systems: Operations Research Models and Applications, Cambridge, Massachusetts, Elsevier Science Publishers B. V., 1989, pp. 361-366. Rabelo, L. and X.
Avula, "Hierarchical Neurocontroller Architecture for Intelligent Robotic Manipulation," IEEE Control Systems Magazine, April 1992, pp. 37-41. Rabelo, L., A Hybrid Artificial Neural Networks and Knowledge-Based Expert Systems Approach to Flexible Manufacturing System Scheduling, Ph.D. Dissertation, University of Missouri-Rolla, 1990. Rabelo, L., A. Jones, and J. Tsai, "Using Hybrid Systems for FMS Scheduling," 1993 IIE Industrial Engineering Research Conference (IERC), 1993. Rabelo, L. and S. Alptekin, "A Hybrid Approach to FMS Scheduling Using Neural and Symbolic Processing," Proceedings of the Joint US/German Conference on New Directions for Operations Research in Manufacturing. Rabelo, L. and S. Alptekin, "A Hybrid Neural and Symbolic Processing Approach to Flexible Manufacturing Systems Scheduling," Intelligent Hybrid Systems, Abraham Kandel and Gideon Langholz (Editors), CRC Press, June 1992. Rabelo, L., Y. Yih, A. Jones, and G. Witzgall, "Intelligent FMS Scheduling Using Modular Neural Networks," Proceedings of the 1993 International Conference on Neural Networks, San Francisco, California, 1993. Rabelo, L., Y. Yih, A. Jones, and J. Tsai, "Intelligent Scheduling for Flexible Manufacturing Systems," 1993 IEEE International Conference on Robotics and Automation, 1993. Raman, N., F. Talbot, and R. Rachamadugu, "Simultaneous Scheduling of Machines and Material Handling Devices in Automated Manufacturing," Proceedings of the Second ORSA/TIMS Conference on Flexible Manufacturing Systems, K. Stecke and R. Suri (Eds.), Elsevier, 1986, pp. 455-465. Rochette, R. and R. P. Sadowski, "A Statistical Comparison of the Performance of Simple Dispatching Rules for a Particular Set of Job Shops," International Journal of Production Research, Vol. 14, No. 1, 1976, pp. 63-75. Rodammer, F. A. and K. P. White Jr., "A Recent Survey of Production Scheduling," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 18, 1988, pp. 841-851. Rumelhart, D., J.
McClelland, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. I: Foundations, Cambridge, Massachusetts, MIT Press, 1986. Saleh, A., Real-Time Control of a Flexible Manufacturing Cell, Ph.D. Dissertation, Lehigh University, 1988. Sen, T. and S. K. Gupta, "A Branch and Bound Procedure to Solve a Bicriterion Scheduling Problem," IIE Transactions, Vol. 15, 1983, pp. 84-88. Shaw, M. J., "A Pattern-directed Approach to Flexible Manufacturing: A Framework for Intelligent Scheduling, Learning, and Control," The International Journal of Flexible Manufacturing Systems, Vol. 2, 1989, pp. 121-144. Shaw, M. J., "A Pattern-directed Approach to FMS Scheduling," Annals of Operations Research, Vol. 15, 1988, pp. 353-376. Shaw, M. J., "An Artificial Intelligence Approach to the Scheduling of Flexible Manufacturing Systems," IIE Transactions, Vol. 21, No. 2, June 1989, pp. 170-183. Shaw, M. J., S. Park, and N. Raman, "Intelligent Scheduling with Machine Learning Capabilities: The Induction of Scheduling Knowledge," IIE Transactions, Vol. 24, No. 2, May 1992, pp. 156-168. Sidney, J. B., "Optimal Single Machine Scheduling with Earliness and Tardiness Penalties," Operations Research, Vol. 25, 1977, pp. 62-69. Smith, M., R. Ramesh, R. Dudek, and E. Blair, "Characteristics of U.S. Flexible Manufacturing Systems -- A Survey," Proceedings of the Second ORSA/TIMS Conference on Flexible Manufacturing Systems, K. Stecke and R. Suri (Eds.), 1986, pp. 477-486. Srikar, B. N. and S. Ghosh, "A MILP Model for the n-job, m-stage Flowshop with Sequence Dependent Setup Times," International Journal of Production Research, Vol. 24, 1986, pp. 1459-1474. Starkweather, T., D. Whitley, K. Mathias, and S. McDaniel, "Sequence Scheduling with Genetic Algorithms," Proceedings of the US/German Conference on New Directions for OR in Manufacturing, 1992, pp. 130-148. Storer, R. H., S. Wu, and R.
Vaccari, "Local Search in Problem and Heuristic Space for Job Shop Scheduling Genetic Algorithms," Proceedings of the US/German Conference on New Directions for OR in Manufacturing, 1992, pp. 150-160. Suddarth, S. C., The Symbolic-Neural Method for Creating Models and Control Behaviors from Examples, Ph.D. Dissertation, University of Washington, 1988. Tang, C., "A Job Scheduling Model for a Flexible Manufacturing Machine," Working Paper 2/85, Yale University, 1985. Thesen, A., Y. Yih, and L. Lei, "Knowledge Acquisition Methods for Expert Scheduling Systems," Proceedings of the 1987 Winter Simulation Conference, pp. 709-714. Towell, G. and J. W. Shavlik, "Interpretation of Artificial Neural Networks: Mapping Knowledge-Based Neural Networks into Rules," Advances in Neural Information Processing Systems, Vol. 4, edited by John E. Moody, Steven J. Hanson, and Richard P. Lippmann, Morgan Kaufmann Publishers Inc., 1992. Tsujimura, Y., S. H. Park, I. S. Chang, and M. Gen, "An Effective Method for Solving Flow Shop Scheduling Problems with Fuzzy Processing Times," Computers & Industrial Engineering, Vol. 25, No. 1-4, 1993, pp. 239-242. Vaithyanathan, S. and J. P. Ignizio, "A Stochastic Neural Network for Resource Constrained Scheduling," Computers & Operations Research, Vol. 19, No. 3/4, 1992, pp. 241-254. Van Vliet, M. and L. N. Van Wassenhove, "Operational Research Techniques for Analysing Flexible Manufacturing Systems," edited by A. Shahani and R. Stanton, Tutorial Papers in Operational Research, 1989. Viviers, F., "A Decision Support System for Job Shop Scheduling," European Journal of Operational Research, Vol. 14, 1983, pp. 95-103. Walburn, D. H. and E. T. Powner, "A Knowledge Based Approach to FMS Scheduling," IEE UK IT 90, 1990, pp. 24-31. Wang, L. and J. M.
Mendel, "Generating Fuzzy Rules by Learning from Examples," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 22, No. 6, Nov/Dec 1992, pp. 1414-1427. Wang, K., H. Hsia, and Z. Zhuang, "An Intelligent Decision System for a Modern Manufacturing System," International Journal of Computer Integrated Manufacturing, Vol. 6, No. 5, 1993, pp. 281-292. Wasserman, P. D., Advanced Methods in Neural Computing, Van Nostrand Reinhold, New York, 1993. Werbos, P. J., "Backpropagation Through Time: What It Does and How to Do It," Proceedings of the IEEE, Vol. 78, No. 10, October 1990. Werbos, P. J., "Neural Networks for Control and System Identification," IEEE Conference on Decision and Control (Florida), IEEE, New York, 1989. Werbos, P. J., "Neurocontrol and Supervised Learning: An Overview and Evaluation," Handbook of Intelligent Control: Neural, Fuzzy, and Adaptive Approaches, edited by D. A. White and D. A. Sofge, Van Nostrand Reinhold. White, D. A. and D. A. Sofge, "Neural Network Based Control for Composite Manufacturing," Intelligent Processing of Materials, ASME Publications, New York, November 1990. Williams, R. J., "Toward a Theory of Reinforcement-Learning Connectionist Systems," Technical Report NU-CCS-88-3, July 1988. Wu, S. D., An Expert System Approach for the Control and Scheduling of Flexible Manufacturing Cells, Ph.D. Dissertation, The Pennsylvania State University, 1987. Yih, Y., "Trace-Driven Knowledge Acquisition (TDKA) for Rule-Based Real-Time Scheduling Systems," Journal of Intelligent Manufacturing, Vol. 1, No. 4, 1990, pp. 217-230. Yih, Y. and A. Jones, "Candidate Rule Selection to Develop Intelligent Scheduling Aids for Flexible Manufacturing Systems (FMS)," Proceedings of the Second US/German Conference on New Directions for Operations Research in Manufacturing, 1993.
Yih, Y., "Learning Real-time Scheduling Rules from Optimal Policy of Semi-Markov Decision," International Journal of Computer Integrated Manufacturing, Vol. 5, No. 3, 1992, pp. 171-181. Zhou, D., V. Cherkassky, T. Baldwin, and D. Hong, "Scaling Neural Network for Job Shop Scheduling," Proceedings of the International Conference on Neural Networks, Vol. 3, 1990, pp. 889-894. Appendix A

Program Lists

/*--------------------------------------------------------------------*/
/* Program Name: nnsch2.c                                             */
/* Developed by: XIAOQIANG YAO   June 1993                            */
/*--------------------------------------------------------------------*/
/* Description: This program performs the neural-network-related task */
/* of deciding the next job to be processed in the One Machine        */
/* Simulator.                                                         */
/*                                                                    */
/* The neural network constructed here has 16 inputs and outputs.     */
/* The number of hidden units will be read in from the NN*.wgt files. */
/*                                                                    */
/* The scheduling rules used here (in order) are:                     */
/*   SPT, LPT, FIFO, LIFO, SST, LST, SPST, LPST, EDD, LDD,            */
/*   mSLACK, MSLACK, CR, SSLACK, and SLACK/RT.                        */
/*                                                                    */
/* The performance criteria used are:                                 */
/*   MAX_FLOW_TIME, MEAN_FLOW_TIME, MAX_TARDINESS, MEAN_TARDINESS,    */
/*   WIP_INVENTORY, MACHINE_UTILIZATION, and THROUGHPUT.              */
/*--------------------------------------------------------------------*/

#define MAX    12
#define RULE   15
#define AB_IN  16
#define sq(x)  ((x) * (x))
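Two details in the definitions above are easy to get wrong. A function-like macro such as `sq` must parenthesize both its argument and its whole expansion, or an expression like `100 / sq(5)` expands incorrectly. The macro is used later in `get_ab()`, which fits a least-squares line y = a + b*i through the queue positions of each candidate ordering. The sketch below is illustrative only; `slope` is a hypothetical helper written for this example, not a function from the listing.

```c
#include <stddef.h>

#define SQ(x) ((x) * (x))   /* fully parenthesized, unlike plain x * x */

/* Least-squares slope b of the line y = a + b*i fitted through the
   points (1, y[0]), (2, y[1]), ..., (n, y[n-1]); the same
   normal-equation form appears in get_ab() in the listing. */
double slope(const int *y, size_t n)
{
    double sxy = 0, sx = 0, sxx = 0, sy = 0;
    for (size_t i = 1; i <= n; i++) {
        sxy += (double)i * y[i - 1];    /* sum of i * y_i   */
        sx  += (double)i;               /* sum of i         */
        sxx += SQ((double)i);           /* sum of i squared */
        sy  += y[i - 1];                /* sum of y_i       */
    }
    return ((double)n * sxy - sy * sx) / ((double)n * sxx - SQ(sx));
}
```

With the unparenthesized form `x * x`, `100 / SQ(5)` would expand to `100 / 5 * 5`, which evaluates to 100 instead of 4.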

/** Function prototypes **/
void get_q_info(void);
void assign_data(void);
struct measure schedule(int qq);
int sort_spt(void);
int sort_lpt(void);
int sort_fifo(void);
int sort_lifo(void);
int sort_sst(void);
int sort_lst(void);
int sort_spst(void);
int sort_lpst(void);
int sort_edd(void);
int sort_ldd(void);
int sort_min_slack(void);
int sort_max_slack(void);
int sort_cr(void);
int sort_sslack(void);
int sort_slack_rtm(void);
void change_order(int i);
struct ab get_ab(int qq, int m);
void prepare_data(int mm);
void put_input(void);
void put_original_q(void);
void put_scheduled_q(void);
void read_maxmin(int m);
void read_nets(int mx);
int perform_nn(void);
int decide_job(int nx);
void save_out(int q_length);
void get_result(void);
void save_rule(int rrid);
void initial_cumu(void);
void cumulating(void);
void save_cumu(int nnn, int rrid);
void avg_cumu(int nnn);
void put_result_s(int net_id, int rrid, int nnn);
void save_ql(void);
void assign_ga(void);

/** Global Variables and custom data types **/

int queuel, start_ct, start_pre_type, queue, Hidden;
int jb_type[MAX], j_number[MAX], a_time[MAX], prs_time[MAX], du_date[MAX];
int type[MAX], job_number[MAX], at[MAX], pt[MAX], due_d[MAX];
float nn_in[AB_IN+1], maxab[AB_IN], minab[AB_IN], nn_out[RULE];
float wgt_hi[100][AB_IN+1], wgt_oh[RULE][101];
int best, better, good, common;
float avg_cval[RULE+1][8];

struct ab {
    float a;
    float b;
};

struct job {
    int order;
    int number;
    int type;
    int arrival_time;
    int process_time;
    int due_time;
    int current_time;
    int setup_time;
    int flow_time;
    int tardiness;
} sch[MAX];

struct measure {
    long max_ft;
    float mean_ft;
    long max_td;
    float mean_td;
    float mach_use;
    float wip_avg;
    float thru_put;
    long tardy_job;
    int que[MAX];
};

struct measure results[RULE+1];
struct measure cum_results[RULE+1];

int sup_time[8][8] = {{0,0,0,0,0,0,0,0},
                      {0,0,1,2,2,3,2,2},
                      {0,1,0,2,3,4,3,2},
                      {0,2,2,0,3,4,3,2},
                      {0,1,2,2,0,4,4,2},
                      {0,1,2,2,3,0,3,2},
                      {0,1,2,2,3,3,0,3},
                      {0,1,2,2,2,2,2,0}};

char *name1[RULE] = {"SPT", "LPT", "FIFO", "LIFO", "SST", "LST",
                     "SPST", "LPST", "EDD", "LDD", "mSLACK", "MSLACK",
                     "CR", "SSLACK", "SLACK/RT"};
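The `sup_time` table makes setup times sequence dependent: the cost of starting a job is looked up from the type of the job that preceded it, with a zero diagonal for no changeover. A minimal sketch of how such a table is consumed (the 4-type `setup` table and the `total_setup` helper below are illustrative, not taken from the listing):

```c
/* Illustrative 4-type excerpt of the sequence-dependent setup idea:
   setup[p][t] is the setup incurred when a job of type t follows a
   job of type p; the diagonal is zero (no changeover). */
static const int setup[4][4] = {
    {0, 1, 2, 2},
    {1, 0, 2, 3},
    {2, 2, 0, 3},
    {1, 2, 2, 0},
};

/* Total setup time of a job-type sequence, starting from type seq[0]. */
int total_setup(const int *seq, int n)
{
    int s = 0;
    for (int i = 1; i < n; i++)
        s += setup[seq[i - 1]][seq[i]];   /* lookup: predecessor -> job */
    return s;
}
```

This lookup is why rules such as SST, SPST, and the slack-based rules in the listing must re-derive setups after every position is fixed: changing the job in position a changes the setup of position a+1.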

FILE *fptr, *fpt, *fptga;

/*--------------------------------------------------------------------*/
/* Function definitions                                               */
/*--------------------------------------------------------------------*/
int main(void)
{
    int exp_id, rl_id, nn_rlid;
    int Num, n;
    FILE *Kga;

    begin_graph();
    while (1) {
        if ((fpt = fopen("result.txt", "w")) == NULL) {
            printf("Error open file \n");
            exit(1);
        }
        setviewport(0, 0, getmaxx(), getmaxy(), 1);
        clearviewport();
        put_sch_rule();
        put_legend();
        rl_id = get_sch_rule();
        Num = input_sample_num();
        put_measure();                   /* put control menu */
        exp_id = get_measure();          /* select performance measure */

        if ((Kga = fopen("perform.txt", "w")) == NULL) {
            printf("Error open file \n");
            exit(1);
        }
        fprintf(Kga, "%d", exp_id);
        fclose(Kga);

        read_maxmin(exp_id);             /* get maximum and minimum values */
        read_nets(exp_id);               /* get NN weights etc. */
        if (rl_id == 1)
            put_net(AB_IN, Hidden, RULE, exp_id);
        if ((fptr = fopen("sch.dat", "r")) == NULL) {
            printf("\n\nError open file !!\n\n");
            end_graph();
            exit(1);
        }
        else {
            n = 0;
            initial_cumu();
            do {
                get_q_info();
                get_result();
                put_original_q();

                switch (rl_id) {
                case 1:
                    put_output(RULE);
                    delay(500);
                    prepare_data(exp_id);
                    put_input();
                    nn_rlid = perform_nn();
                    highlight_output(best, better, good, common);
                    results[15] = schedule(decide_job(nn_rlid));
                    break;
                case 2:
                    save_ql();
                    put_net(AB_IN, Hidden, RULE, exp_id);
                    put_output(RULE);
                    prepare_data(exp_id);
                    put_input();
                    nn_rlid = perform_nn();
                    highlight_output(best, better, good, common);
                    delay(1500);
                    GA();
                    assign_ga();
                    results[15] = schedule(queue);
                    break;
                }
                setviewport(0, 0, getmaxx(), getmaxy(), 1);
                put_scheduled_q();
                cumulating();
                save_rule(rl_id);
                delay(1000);
                n++;
            } while (n < Num);
        }
        fclose(fptr);
        fclose(fpt);
        put_result_s(exp_id, rl_id, n);
    }
}
/*--------------------------------------------------------------------*/
void get_q_info(void)
{
    int i, k, x = 0;
    int dummy = 1;

    for (i = 1; i <= queuel; i++) {
        j_number[i] = i;
        fscanf(fptr, "%d %d\n", &jb_type[i], &a_time[i]);
        fscanf(fptr, "%d %d\n", &prs_time[i], &du_date[i]);
    }
    while (dummy != EOF) {
        fscanf(fptr, "%d", &dummy);
        if (dummy == EOF)
            break;
        else {
            k = queuel + x + 1;      /* append after previously read jobs */
            j_number[k] = k;
            jb_type[k] = dummy;
            fscanf(fptr, "%d\n", &a_time[k]);
            fscanf(fptr, "%d %d\n", &prs_time[k], &du_date[k]);
            x++;
        }
    }
    queue = queuel + x;
}
/*--------------------------------------------------------------------*/
void save_ql(void)
{
    int i;
    FILE *ffql;

    remove("queuel.txt");
    if ((ffql = fopen("queuel.txt", "w")) == NULL) {
        printf("Error open file \n");
        exit(1);
    }
    for (i = 1; i <= queue; i++) {
        fprintf(ffql, "%d %d\n", jb_type[i], a_time[i]);
        fprintf(ffql, "%d %d\n", prs_time[i], du_date[i]);
    }
    fclose(ffql);
}
/*--------------------------------------------------------------------*/
void assign_ga(void)

{
    int i, sque[20];
    FILE *ffga;

    if ((ffga = fopen("gaout1.txt", "r")) == NULL) {
        printf("Error open file \n");
        exit(1);
    }
    for (i = 1; i <= queue; i++) {
        fscanf(ffga, "%d", &sque[i]);
        job_number[i] = j_number[sque[i]];
        type[i]  = jb_type[sque[i]];
        at[i]    = a_time[sque[i]];
        pt[i]    = prs_time[sque[i]];
        due_d[i] = du_date[sque[i]];
    }
    fclose(ffga);
}
/*--------------------------------------------------------------------*/
void prepare_data(int mm)
{
    int i, a, m;
    struct ab tmp_ab;
    float tmp_max[AB_IN], tmp_min[AB_IN];

    m = mm;
    for (i = 0; i < AB_IN; i++) {
        tmp_max[i] = maxab[i];
        tmp_min[i] = minab[i];
    }

    /* (a, b) line-fit features for eight queue orderings */
    assign_data();
    a = sort_spt();
    tmp_ab = get_ab(a, m);
    nn_in[1] = tmp_ab.a;
    nn_in[2] = tmp_ab.b;

    assign_data();
    a = sort_lpt();
    tmp_ab = get_ab(a, m);
    nn_in[3] = tmp_ab.a;
    nn_in[4] = tmp_ab.b;

    assign_data();
    a = sort_fifo();
    tmp_ab = get_ab(a, m);
    nn_in[5] = tmp_ab.a;
    nn_in[6] = tmp_ab.b;

    assign_data();
    a = sort_lifo();
    tmp_ab = get_ab(a, m);
    nn_in[7] = tmp_ab.a;
    nn_in[8] = tmp_ab.b;

    assign_data();
    a = sort_edd();
    tmp_ab = get_ab(a, m);
    nn_in[9] = tmp_ab.a;
    nn_in[10] = tmp_ab.b;

    assign_data();
    a = sort_ldd();
    tmp_ab = get_ab(a, m);
    nn_in[11] = tmp_ab.a;
    nn_in[12] = tmp_ab.b;

    assign_data();
    a = sort_sst();
    tmp_ab = get_ab(a, m);
    nn_in[13] = tmp_ab.a;
    nn_in[14] = tmp_ab.b;

    assign_data();
    a = sort_lst();
    tmp_ab = get_ab(a, m);
    nn_in[15] = tmp_ab.a;
    nn_in[16] = tmp_ab.b;

    /* scale every feature into [0, 1] with the stored extremes */
    for (i = 0; i < AB_IN; i++) {
        if (nn_in[i+1] < minab[i])
            tmp_min[i] = nn_in[i+1];
        if (nn_in[i+1] > maxab[i])
            tmp_max[i] = nn_in[i+1];
        nn_in[i+1] = (nn_in[i+1] - tmp_min[i]) / (tmp_max[i] - tmp_min[i]);
    }
    nn_in[0] = 1.0;
}
/*--------------------------------------------------------------------*/
void read_maxmin(int mx)
{
    int i;
    FILE *fptmax1, *fptmax2, *fptmax3;

    switch (mx) {
    case 0:
        fptmax3 = fopen("maxmin3.all", "r");
        for (i = 0; i < 16; i++)
            fscanf(fptmax3, "%f", &maxab[i]);
        for (i = 0; i < 16; i++)
            fscanf(fptmax3, "%f", &minab[i]);
        fclose(fptmax3);
        break;
    case 1: case 2: case 3: case 4:
        fptmax1 = fopen("maxmin1.all", "r");
        for (i = 0; i < 16; i++)
            fscanf(fptmax1, "%f", &maxab[i]);
        for (i = 0; i < 16; i++)
            fscanf(fptmax1, "%f", &minab[i]);
        fclose(fptmax1);
        break;
    case 5: case 6:
        fptmax2 = fopen("maxmin2.all", "r");
        for (i = 0; i < 16; i++)
            fscanf(fptmax2, "%f", &maxab[i]);
        for (i = 0; i < 16; i++)
            fscanf(fptmax2, "%f", &minab[i]);
        fclose(fptmax2);
        break;
    default:
        printf("Warning: Error in file !!\n");
        exit(1);
    }
}
/*--------------------------------------------------------------------*/
struct ab get_ab(int qq, int mx)
{
    int i;
    float b1 = 0, sum = 0, b2 = 0, b3 = 0;
    struct ab ab1;

    type[0] = start_pre_type;
    switch (mx) {
    case 0:
        for (i = 1; i <= qq; i++) {
            b1  += i * (start_ct + pt[i] + sup_time[type[i-1]][type[i]] - at[i]);
            b2  += i;
            b3  += sq(i);
            sum += start_ct + pt[i] + sup_time[type[i-1]][type[i]] - at[i];
        }
        break;
    case 1: case 2: case 3: case 4:
        for (i = 1; i <= qq; i++) {
            b1  += i * (pt[i] + sup_time[type[i-1]][type[i]]);
            b2  += i;
            b3  += sq(i);
            sum += pt[i] + sup_time[type[i-1]][type[i]];
        }
        break;
    case 5: case 6:
        for (i = 1; i <= qq; i++) {
            b1  += i * sup_time[type[i-1]][type[i]];
            b2  += i;
            b3  += sq(i);
            sum += sup_time[type[i-1]][type[i]];
        }
        break;
    default:
        printf("Warning: Error in file !!\n");
        exit(1);
    }
    /* least-squares line fit through the qq points (i, y_i) */
    ab1.b = (qq*b1 - sum*b2) / (qq*b3 - sq(b2));
    ab1.a = sum/qq - (ab1.b*b2)/qq;
    return (ab1);
}
/*--------------------------------------------------------------------*/
void assign_data(void)
{
    int i;

    for (i = 1; i <= queue; i++) {
        job_number[i] = j_number[i];
        type[i]  = jb_type[i];
        at[i]    = a_time[i];
        pt[i]    = prs_time[i];
        due_d[i] = du_date[i];
    }
}
/*--------------------------------------------------------------------*/
int current_time(int x)
{
    int i, this_c_time[MAX];

    type[0] = start_pre_type;
    this_c_time[0] = start_ct;
    for (i = 1; i <= x; i++) {
        this_c_time[i] = this_c_time[i-1] + pt[i]
                         + sup_time[type[i-1]][type[i]];
    }
    return (this_c_time[x]);
}
/*--------------------------------------------------------------------*/
int sort_spt(void)
{
    int i, a = 1, signal, q = queuel;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    if ((pt[i] > pt[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            a++;
        } while (a <= q);
        return (q);
    }

    else {
        do {
            signal = 0;
            for (i = 1; i <= q; i++) {
                if ((pt[i] > pt[i+1]) && (i != q)) {
                    change_order(i);
                    signal = 1;
                }
            }
        } while (signal);
        return (q);
    }
}
/*--------------------------------------------------------------------*/
int sort_lpt(void)
{
    int i, a = 1, signal, q = queuel;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    if ((pt[i] < pt[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            signal = 0;
            for (i = 1; i <= q; i++) {
                if ((pt[i] < pt[i+1]) && (i != q)) {
                    change_order(i);
                    signal = 1;
                }
            }
        } while (signal);
        return (q);
    }
}
/*--------------------------------------------------------------------*/
int sort_fifo(void)
{
    int i, a = 1, signal, q = queuel;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    if ((at[i] > at[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            signal = 0;
            for (i = 1; i <= q; i++) {
                if ((at[i] > at[i+1]) && (i != q)) {
                    change_order(i);
                    signal = 1;
                }
            }
        } while (signal);
        return (q);
    }
}
/*--------------------------------------------------------------------*/
int sort_lifo(void)
{
    int i, a = 1, signal, q = queuel;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    if ((at[i] < at[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            signal = 0;
            for (i = 1; i <= q; i++) {
                if ((at[i] < at[i+1]) && (i != q)) {
                    change_order(i);
                    signal = 1;
                }
            }
        } while (signal);
        return (q);
    }
}

/*--------------------------------------------------------------------*/
int sort_sst(void)
{
    int i, a = 1, signal, q = queuel, setup[MAX], pre_type = start_pre_type;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++)
                    setup[i] = sup_time[pre_type][type[i]];
                for (i = a; i <= q; i++) {
                    if ((setup[i] > setup[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((setup[i] == setup[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            pre_type = type[a];
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++)
                    setup[i] = sup_time[pre_type][type[i]];
                for (i = a; i <= q; i++) {
                    if ((setup[i] > setup[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((setup[i] == setup[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            pre_type = type[a];
            a++;
        } while (a < q);
        return (q);
    }
}

/*--------------------------------------------------------------------*/
int sort_lst(void)
{
    int i, a = 1, signal, q = queuel, setup[MAX], pre_type = start_pre_type;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++)
                    setup[i] = sup_time[pre_type][type[i]];
                for (i = a; i <= q; i++) {
                    if ((setup[i] < setup[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((setup[i] == setup[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            pre_type = type[a];
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++)
                    setup[i] = sup_time[pre_type][type[i]];
                for (i = a; i <= q; i++) {
                    if ((setup[i] < setup[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((setup[i] == setup[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            pre_type = type[a];
            a++;
        } while (a < q);
        return (q);
    }
}
/*--------------------------------------------------------------------*/
int sort_spst(void)
{
    int i, a = 1, signal, pre_type = start_pre_type;
    int pst[MAX], setup[MAX], q = queuel;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    setup[i] = sup_time[pre_type][type[i]];
                    pst[i] = pt[i] + setup[i];
                }
                for (i = a; i <= q; i++) {
                    if ((pst[i] > pst[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((pst[i] == pst[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            pre_type = type[a];
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    setup[i] = sup_time[pre_type][type[i]];
                    pst[i] = pt[i] + setup[i];
                }
                for (i = a; i <= q; i++) {
                    if ((pst[i] > pst[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((pst[i] == pst[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            pre_type = type[a];
            a++;
        } while (a < q);
        return (q);
    }
}

/*--------------------------------------------------------------------*/
int sort_lpst(void)
{
    int i, a = 1, signal, pre_type = start_pre_type;
    int pst[MAX], setup[MAX], q = queuel;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    setup[i] = sup_time[pre_type][type[i]];
                    pst[i] = pt[i] + setup[i];
                }
                for (i = a; i <= q; i++) {
                    if ((pst[i] < pst[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((pst[i] == pst[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            pre_type = type[a];
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    setup[i] = sup_time[pre_type][type[i]];
                    pst[i] = pt[i] + setup[i];
                }
                for (i = a; i <= q; i++) {
                    if ((pst[i] < pst[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((pst[i] == pst[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            pre_type = type[a];
            a++;
        } while (a < q);
        return (q);
    }
}

/*--------------------------------------------------------------------*/
int sort_edd(void)
{
    int i, a = 1, signal, q = queuel;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    if ((due_d[i] > due_d[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            signal = 0;
            for (i = 1; i <= q; i++) {
                if ((due_d[i] > due_d[i+1]) && (i != q)) {
                    change_order(i);
                    signal = 1;
                }
            }
        } while (signal);
        return (q);
    }
}
/*--------------------------------------------------------------------*/
int sort_ldd(void)
{
    int i, a = 1, signal, q = queuel;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    if ((due_d[i] < due_d[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            signal = 0;
            for (i = 1; i <= q; i++) {
                if ((due_d[i] < due_d[i+1]) && (i != q)) {
                    change_order(i);
                    signal = 1;
                }
            }
        } while (signal);
        return (q);
    }
}
/*--------------------------------------------------------------------*/
int sort_min_slack(void)
{
    int i, a = 1, signal, q = queuel, finish_time[MAX], slack[MAX];
    int curr_time = start_ct, pre_type = start_pre_type;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    finish_time[i] = curr_time + pt[i]
                                     + sup_time[pre_type][type[i]];
                    slack[i] = due_d[i] - finish_time[i];
                }
                for (i = a; i <= q; i++) {
                    if ((slack[i] > slack[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((slack[i] == slack[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            curr_time = finish_time[a];
            pre_type = type[a];
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    finish_time[i] = curr_time + pt[i]
                                     + sup_time[pre_type][type[i]];
                    slack[i] = due_d[i] - finish_time[i];
                }
                for (i = a; i <= q; i++) {
                    if ((slack[i] > slack[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((slack[i] == slack[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            curr_time = finish_time[a];
            pre_type = type[a];
            a++;
        } while (a < q);
        return (q);
    }
}
/*--------------------------------------------------------------------*/
int sort_max_slack(void)
{
    int i, a = 1, signal, q = queuel, finish_time[MAX], slack[MAX];
    int curr_time = start_ct, pre_type = start_pre_type;

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    finish_time[i] = curr_time + pt[i]
                                     + sup_time[pre_type][type[i]];
                    slack[i] = due_d[i] - finish_time[i];
                }
                for (i = a; i <= q; i++) {
                    if ((slack[i] < slack[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((slack[i] == slack[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            curr_time = finish_time[a];
            pre_type = type[a];
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    finish_time[i] = curr_time + pt[i]
                                     + sup_time[pre_type][type[i]];
                    slack[i] = due_d[i] - finish_time[i];
                }
                for (i = a; i <= q; i++) {
                    if ((slack[i] < slack[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((slack[i] == slack[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            curr_time = finish_time[a];
            pre_type = type[a];
            a++;
        } while (a < q);
        return (q);
    }
}
/*--------------------------------------------------------------------*/
int sort_cr(void)
{
    int i, a = 1, signal, q = queuel;
    int curr_time = start_ct, pre_type = start_pre_type;
    float cr[MAX];

    if (queue > queuel) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++)
                    cr[i] = (float)(due_d[i] - curr_time)
                            / (float)(pt[i] + sup_time[pre_type][type[i]]);
                for (i = a; i <= q; i++) {
                    if ((cr[i] > cr[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((cr[i] == cr[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            if (at[queuel+1] <= current_time(a))
                q = queue;
            else
                q = queuel;
            curr_time += pt[a] + sup_time[pre_type][type[a]];
            pre_type = type[a];
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++)
                    cr[i] = (float)(due_d[i] - curr_time)
                            / (float)(pt[i] + sup_time[pre_type][type[i]]);
                for (i = a; i <= q; i++) {
                    if ((cr[i] > cr[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((cr[i] == cr[i+1])
                             && (job_number[i] > job_number[i+1])
                             && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);
            curr_time += pt[a] + sup_time[pre_type][type[a]];
            pre_type = type[a];
            a++;
        } while (a < q);
        return (q);
    }
}

int sort_sslack()
{
    int i, a = 1, signal, q = queue1, sub_time[MAX], sslack[MAX];
    int pre_type = start_pre_type;

    if (queue > queue1) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    sub_time[i] = at[i] + pt[i] + sup_time[pre_type][type[i]];
                    sslack[i] = due_d[i] - sub_time[i];
                }
                for (i = a; i <= q; i++) {
                    if ((sslack[i] > sslack[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((sslack[i] == sslack[i+1]) &&
                             (job_number[i] > job_number[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);

            if (at[queue1+1] <= current_time(a))
                q = queue;
            else
                q = queue1;
            pre_type = type[a];
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    sub_time[i] = at[i] + pt[i] + sup_time[pre_type][type[i]];
                    sslack[i] = due_d[i] - sub_time[i];
                }
                for (i = a; i <= q; i++) {
                    if ((sslack[i] > sslack[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((sslack[i] == sslack[i+1]) &&
                             (job_number[i] > job_number[i+1]) && (i != q)) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);

            pre_type = type[a];
            a++;
        } while (a <= q);
        return (q);
    }
}

/*--------*/
int sort_slack_rtm()
{
    int i, a = 1, signal, q = queue1, sub_time[MAX];
    int pre_type = start_pre_type;
    long curr_time = start_ct;
    float slack_ratio[MAX];

    if (queue > queue1) {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    sub_time[i] = curr_time + pt[i] +
                                  sup_time[pre_type][type[i]];
                    if (curr_time != due_d[i])
                        slack_ratio[i] = ((float)(due_d[i] - sub_time[i])) /
                                         ((float)(due_d[i] - curr_time));
                    else
                        slack_ratio[i] = 9999.9;
                }
                for (i = a; i < q; i++) {
                    if (slack_ratio[i] > slack_ratio[i+1]) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((slack_ratio[i] == slack_ratio[i+1]) &&
                             (job_number[i] > job_number[i+1])) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);

            if ((at[queue1+1] <= current_time(a)) ||
                (at[queue1+1] <= current_time(queue1)))
                q = queue;
            else
                q = queue1;
            curr_time = sub_time[a];
            pre_type = type[a];
            a++;
        } while (a <= q);
        return (q);
    }
    else {
        do {
            do {
                signal = 0;
                for (i = a; i <= q; i++) {
                    sub_time[i] = curr_time + pt[i] +
                                  sup_time[pre_type][type[i]];
                    if (curr_time != due_d[i])
                        slack_ratio[i] = ((float)(due_d[i] - sub_time[i])) /
                                         ((float)(due_d[i] - curr_time));
                    else
                        slack_ratio[i] = 9999.9;
                }
                for (i = a; i < q; i++) {
                    if (slack_ratio[i] > slack_ratio[i+1]) {
                        change_order(i);
                        signal = 1;
                    }
                    else if ((slack_ratio[i] == slack_ratio[i+1]) &&
                             (job_number[i] > job_number[i+1])) {
                        change_order(i);
                        signal = 1;
                    }
                }
            } while (signal);

            curr_time = sub_time[a];
            pre_type = type[a];
            a++;
        } while (a <= q);
        return (q);
    }
}

/*--------*/
/* (the opening lines of the rule-selector function are missing here) */
            if ((nn_out[k] > best_rule[r]) &&
                (k != r_id[0] && k != r_id[1] &&
                 k != r_id[2] && k != r_id[3])) {
                best_rule[r] = nn_out[k];
                best   = r_id[0];
                better = r_id[1];
                good   = r_id[2];
                common = r_id[3];
                save_out(decide_job(best));
                return (best);
            }
}

/*--------*/
int decide_job(int nx)

{
    int tmp;

    assign_data();
    switch (nx) {
        case 0:  tmp = sort_spt();        break;
        case 1:  tmp = sort_lpt();        break;
        case 2:  tmp = sort_fifo();       break;
        case 3:  tmp = sort_lifo();       break;
        case 4:  tmp = sort_sst();        break;
        case 5:  tmp = sort_lst();        break;
        case 6:  tmp = sort_spst();       break;
        case 7:  tmp = sort_lpst();       break;
        case 8:  tmp = sort_edd();        break;
        case 9:  tmp = sort_ldd();        break;
        case 10: tmp = sort_min_slack();  break;
        case 11: tmp = sort_max_slack();  break;
        case 12: tmp = sort_cr();         break;
        case 13: tmp = sort_sslack();     break;
        case 14: tmp = sort_slack_rtm();  break;
        default: break;
    }
    return (tmp);
}

void save_out(int q_length)
{
    int i;

    remove("gain.txt");
    fptga = fopen("gain.txt", "w");

    for (i = 1; i <= q_length; i++)
        fprintf(fptga, "%d ", job_number[i] - 1);
    fprintf(fptga, "\n");
    assign_data();
    decide_job(better);
    for (i = 1; i <= q_length; i++)
        fprintf(fptga, "%d ", job_number[i] - 1);
    fprintf(fptga, "\n");
    assign_data();
    decide_job(good);
    for (i = 1; i <= q_length; i++)
        fprintf(fptga, "%d ", job_number[i] - 1);
    fprintf(fptga, "\n");
    assign_data();
    decide_job(common);
    for (i = 1; i <= q_length; i++)
        fprintf(fptga, "%d ", job_number[i] - 1);
    fprintf(fptga, "\n");
    fclose(fptga);
}

void read_nets(int mx)
{
    int i, j, k;
    FILE *fn1, *fn2, *fn3, *fn4, *fn5, *fn6, *fn7;

    switch (mx) {
        case 0:                          /* MAX FLOW TIME */
            fn1 = fopen("nnmxft.wgt", "r");
            fscanf(fn1, "%d", &Hidden);
            for (i = 0; i <= AB_IN; i++)
                for (j = 1; j <= Hidden; j++)
                    fscanf(fn1, "%f", &wgt_hi[j][i]);
            for (j = 0; j <= Hidden; j++)
                for (k = 0; k

/*--------*/
void put_original_q(void)
{
    int i;
    char s_ptp[30], s_qsz[30], s_sct[60];
    char s_at[60], s_nm[30], s_pt[60], s_dd[60];
    const a = 215, b = 345;
    static int xa = 0, xb = 0, xc = 0;

    sprintf(s_qsz, "%d", xa);
    sprintf(s_ptp, "%d", xb);
    sprintf(s_sct, "%d", xc);
    setcolor(7);
    settextstyle(0, 0, 1);

    outtextxy(120, 340, s_qsz);
    outtextxy(135, 390, s_sct);
    outtextxy(145, 365, s_ptp);
    sprintf(s_qsz, "%d", queue);
    sprintf(s_ptp, "%d", start_pre_type);
    sprintf(s_sct, "%d", start_ct);
    xa = queue;
    xb = start_pre_type;
    xc = start_ct;
    setcolor(1);
    settextstyle(0, 0, 1);
    outtextxy(120, 340, s_qsz);
    outtextxy(135, 390, s_sct);
    setcolor(type_color(start_pre_type));
    outtextxy(145, 365, s_ptp);
    setcolor(RED);
    outtextxy(205+165, 320+7, "Original Queue");
    setcolor(8);
    outtextxy(205+160, 320+(getmaxy()-1-320)/2+7, "Scheduled Queue");
    for (i = 1; i <= queue; i++) {
        block(a+(i-1)*33, 425, a+30+(i-1)*33, 425+41, 7);
        block(a+(i-1)*33, b, a+30+(i-1)*33, b+41, 7);
    }
    delay(100);
    for (i = 1; i <= queue; i++) {
        sprintf(s_nm, "%d", j_number[i]);
        sprintf(s_at, "%d", a_time[i]);
        sprintf(s_pt, "%d", prs_time[i]);
        sprintf(s_dd, "%d", du_date[i]);

        setcolor(0);
        settextstyle(2, 0, 4);
        outtextxy(a+1+(i-1)*33, b+1, s_nm);
        outtextxy(a+1+(i-1)*33, b+11, s_at);
        outtextxy(a+1+(i-1)*33, b+21, s_pt);
        outtextxy(a+1+(i-1)*33, b+31, s_dd);
    }
}

/*--------*/
void put_scheduled_q(void)
{
    int i;
    char s_ptp[30], s_qsz[30], s_sct[60];
    char s_at[60], s_nm[30], s_pt[60], s_dd[60];
    const a = 215, b = 425;

    setcolor(8);
    settextstyle(0, 0, 1);
    outtextxy(205+165, 320+7, "Original Queue");
    setcolor(RED);
    outtextxy(205+160, 320+(getmaxy()-1-320)/2+7, "Scheduled Queue");
    for (i = 1; i <= queue; i++) {
        sprintf(s_nm, "%d", job_number[i]);
        sprintf(s_at, "%d", at[i]);
        sprintf(s_pt, "%d", pt[i]);
        sprintf(s_dd, "%d", due_d[i]);

        block(a+(i-1)*33, b, a+30+(i-1)*33, b+41, type_color(type[i]));
        setcolor(0);
        settextstyle(2, 0, 4);
        outtextxy(a+1+(i-1)*33, b+1, s_nm);
        outtextxy(a+1+(i-1)*33, b+11, s_at);
        outtextxy(a+1+(i-1)*33, b+21, s_pt);
        outtextxy(a+1+(i-1)*33, b+31, s_dd);
    }
}

/*--------*/
void get_result(void)
{
    int qqq;

    assign_data();
    qqq = sort_spt();
    results[0] = schedule(qqq);
    assign_data();
    qqq = sort_lpt();
    results[1] = schedule(qqq);
    assign_data();
    qqq = sort_fifo();
    results[2] = schedule(qqq);
    assign_data();
    qqq = sort_lifo();
    results[3] = schedule(qqq);
    assign_data();
    qqq = sort_sst();
    results[4] = schedule(qqq);
    assign_data();
    qqq = sort_lst();
    results[5] = schedule(qqq);
    assign_data();
    qqq = sort_spst();
    results[6] = schedule(qqq);

    assign_data();
    qqq = sort_lpst();
    results[7] = schedule(qqq);

    assign_data();
    qqq = sort_edd();
    results[8] = schedule(qqq);

    assign_data();
    qqq = sort_ldd();
    results[9] = schedule(qqq);

    assign_data();
    qqq = sort_min_slack();
    results[10] = schedule(qqq);

    assign_data();
    qqq = sort_max_slack();
    results[11] = schedule(qqq);

    assign_data();
    qqq = sort_cr();
    results[12] = schedule(qqq);

    assign_data();
    qqq = sort_sslack();
    results[13] = schedule(qqq);

    assign_data();
    qqq = sort_slack_rtm();
    results[14] = schedule(qqq);
}

struct measure schedule(int qqq)
{
    int i, tm_total;
    long ft_total = 0, td_total = 0, wip_sub = 0, res_sub = 0;
    struct measure temp_sch;

    temp_sch.max_ft = sch[1].flow_time;
    temp_sch.max_td = sch[1].tardiness;
    sch[0].type = start_pre_type;
    sch[0].current_time = start_ct;

    for (i = 1; i <= qqq; i++) {
        sch[i].order = i;
        sch[i].number = job_number[i];
        temp_sch.que[i] = sch[i].number;
        sch[i].type = type[i];
        sch[i].arrival_time = at[i];
        sch[i].process_time = pt[i];
        sch[i].due_time = due_d[i];
        sch[i].setup_time = sup_time[sch[i-1].type][sch[i].type];
        sch[i].current_time = sch[i-1].current_time +
                              sch[i-1].process_time + sch[i-1].setup_time;
        sch[i].flow_time = sch[i].current_time + sch[i].process_time +
                           sch[i].setup_time - sch[i].arrival_time;
        sch[i].tardiness = sch[i].current_time + sch[i].process_time +
                           sch[i].setup_time - sch[i].due_time;

        ft_total += (long)sch[i].flow_time;
        td_total += (long)sch[i].tardiness;

        wip_sub += (long)((sch[i].process_time + sch[i].setup_time) *
                          (qqq + 1 - i));
        res_sub += (long)sch[i].process_time;
    }
    tm_total = (sch[qqq].current_time + sch[qqq].process_time +
                sch[qqq].setup_time) - start_ct;
    temp_sch.mean_ft = (float)ft_total / (float)qqq;
    temp_sch.mean_td = (float)td_total / (float)qqq;
    temp_sch.wip_avg = (float)wip_sub / (float)tm_total;
    temp_sch.mach_use = (float)res_sub / (float)tm_total;
    temp_sch.thru_put = (float)qqq / (float)tm_total;
    return (temp_sch);
}

/*--------*/
void save_rule(int rrid)
{
    int i, j;
    static int nn = 1;

    fprintf(fpt, "SCHEDULING PROBLEM # %d:\n", nn);
    for (i = 0; i < 78; i++)
        fprintf(fpt, "%c", 95);
    fprintf(fpt, "\nSCHD Max  Mean Max  Mean WIP  Mach Thru Td\n");
    fprintf(fpt, "RULE ft   ft   td   td   avg  use  put  Job  Job Sequence\n");
    for (i = 0; i < 78; i++)
        fprintf(fpt, "%c", 45);
    fprintf(fpt, "\n");
    for (i = 0; i

        for (j = 1; j <= queue; j++)
            fprintf(fpt, "%2d ", results[i].que[j]);
        fprintf(fpt, "\n");
    }
    i = 15;
    switch (rrid) {
        case 1:
            fprintf(fpt,
                "NN %3ld %5.1f %3ld %5.1f %5.3f %5.3f %5.3f %2ld ",
                results[i].max_ft, results[i].mean_ft, results[i].max_td,
                results[i].mean_td, results[i].wip_avg, results[i].mach_use,
                results[i].thru_put, results[i].tardy_job);
            break;
        case 2:
            fprintf(fpt,
                "GA %3ld %5.1f %3ld %5.1f %5.3f %5.3f %5.3f %2ld ",
                results[i].max_ft, results[i].mean_ft, results[i].max_td,
                results[i].mean_td, results[i].wip_avg, results[i].mach_use,
                results[i].thru_put, results[i].tardy_job);
            break;
    }
    for (j = 1; j <= queue; j++)
        fprintf(fpt, "%2d ", results[i].que[j]);
    fprintf(fpt, "\n");

    for (i = 0; i < 78; i++)
        fprintf(fpt, "%c", 95);
    fprintf(fpt, "\n\n\n");
}

/*--------*/
void save_cumu(int nnn, int rrid)
{
    int i, j;

    fprintf(fpt, "\n CUMULATIVE RESULTS\n\n");
    fprintf(fpt, "Total number of jobs finished: %d\n", nnn*10);
    for (i = 0; i < 78; i++)
        fprintf(fpt, "%c", 95);
    fprintf(fpt, "\nSCHD Max  Mean Max  Mean WIP  Mach Thru Td\n");
    fprintf(fpt, "RULE ft   ft   td   td   avg  use  put  Job \n");
    for (i = 0; i < 78; i++)
        fprintf(fpt, "%c", 45);
    fprintf(fpt, "\n");

    for (i = 0; i
        for (j = 0; j < 8; j++)
            fprintf(fpt, "%8.3f ", avg_cval[i][j]);
        fprintf(fpt, "\n");
    }
    for (i = 0; i < 78; i++)
        fprintf(fpt, "%c", 95);
}

void initial_cumu(void)
{
    int i;

    for (i = 0; i

    block(0, 0, getmaxx(), getmaxy(), 15);
    box(1, 1, getmaxx()-1, getmaxy()-1, BLUE);
    block(x1, y1, x2, y2, 7);
    block(x2, y1+18, x2+15, y2+18, 8);
    block(x1+15, y2, x2, y2+18, 8);
    box(x1+4, y1+4, x2-5, y2-5, 8);
    box(x1+5, y1+5, x2-4, y2-4, 15);
    settextstyle(0, 0, 1);
    box(235, 55, 235+152+5, 71, 0);
    setcolor(BLUE);
    switch (net_id) {
        case 0: outtextxy(240, 60, "Maximum Flow Time");   break;
        case 1: outtextxy(240, 60, "  Mean Flow Time");    break;
        case 2: outtextxy(240, 60, " Maximum Tardiness");  break;
        case 3: outtextxy(240, 60, "  Mean Tardiness");    break;
        case 4: outtextxy(240, 60, "   WIP Inventory");    break;
        case 5: outtextxy(240, 60, "Machine Utilization"); break;
        case 6: outtextxy(240, 60, "   Throughput");       break;
    }
    for (i = 0; i

    sprintf(str1, "%3d", nnn);
    outtextxy(x1+60+248, y1+80+17*15, str1);
    sprintf(str1, "%3d", jobnum);
    outtextxy(x1+60+248, y1+80+18*15, str1);
    setcolor(0);
    outtextxy(x1+46, y2-39, "PRESS [ESC] TO EXIT, [ENTER] TO REPEAT");
    setcolor(RED);
    outtextxy(x1+45, y2-40, "PRESS [ESC] TO EXIT, [ENTER] TO REPEAT");
    do {
        xyz = getch();
        if (xyz == 27) {
            end_graph();
            exit(0);
        }
        else if (xyz == 13)
            flag = 1;
    } while (flag == 0);
}
/** End of program nnsch2.c **/

/** mult_out.c
 ** Windows DLL.  Calculates outputs for performance measures MFT and MTD
 ** by 9 different dispatching rules:
 ** SPT, LPT, FIFO, EDD, LWR, DSLACK, SLACK/RO, CR, LNOR
 **/

#define FALSE    0
#define TRUE     1
#define MAXINT   16000
#define MAXRULE  9

#define INPUT_FILE "jobdata.tmp"
#define MFT_OUT    "mft.out"
#define MTD_OUT    "mtd.out"

/** type definitions **/
typedef struct part{
    int no_op;               /* number of operations */
    int p_plan[25][4];       /* 1 - operation #      */
                             /* 2 - machine #        */
                             /* 3 - process time     */
    int r_time;              /* ready time           */
    int d_date;              /* due date             */
    int f_time;              /* flow time            */
    int tard;                /* tardiness            */
    float priority;          /* the smaller, the higher priority */
} PART;

typedef struct slot_info{
    int sta;                 /* start time  */
    int fin;                 /* finish time */
    struct slot_info * next;
} SLOTINFO, *SlotPtr;

typedef struct job_info{
    int job;                 /* job #        */
    int op;                  /* operation #  */
    int mac;                 /* machine #    */
    int es;                  /* clock time   */
    int dur;                 /* process time */
    int ok;
    struct job_info * next;
} JOBINFO, *JobPtr;

/** global variables **/
char * rule_name[] = {"SPT", "LPT", "FIFO", "EDD", "LWR",
                      "DSLACK", "SLACK/RO", "CR", "LNOR"};
int gv_numGrp;
int no_mac, no_job, num_of_op;
int jobNum, readyTime, dueDate;
int gv_jobType[100];
int done, counter, rule_id;

PART mas_job[100], job[100];
SlotPtr m_head[100];
JobPtr s_head, next_op;

FILE *inPtr;
FILE *mft_ptr, *mtd_ptr;

/** function prototypes **/
void input(void);
void initialize(void);
void report(void);
void update(void);
void find_schedule_next_op(void);

int FAR PASCAL LibMain(HANDLE hModule, WORD wDataSeg, WORD cbHeapSize,
                       LPSTR lpszCmdLine)
{
    return TRUE;
}

int FAR PASCAL _export WEP(int bSystemExit)
{
    return TRUE;
}

int FAR PASCAL rule_output()
{
    int i, j, k;

    if ((inPtr = fopen(INPUT_FILE, "r")) == NULL)
        return 101;

    if ((mft_ptr = fopen(MFT_OUT, "a")) == NULL)
        return 102;
    if ((mtd_ptr = fopen(MTD_OUT, "a")) == NULL)
        return 103;

    for (k = 1; k <= gv_numGrp; k++) {
        fscanf(inPtr, "%d\n", &no_job);

        for (i = 1; i <= no_job; i++) {
            fscanf(inPtr, "%d %d %d %d %d\n",
                   &jobNum, &gv_jobType[i], &mas_job[i].no_op,
                   &readyTime, &dueDate);

            for (j = 1; j <= mas_job[i].no_op; j++) {
                fscanf(inPtr, "%d %d %d\n",
                       &mas_job[i].p_plan[j][1],
                       &mas_job[i].p_plan[j][3],
                       &mas_job[i].p_plan[j][2]);
            }

            mas_job[i].r_time = readyTime;
            mas_job[i].d_date = dueDate;

            mas_job[i].priority = 0;
            mas_job[i].f_time = 0;
            mas_job[i].tard = 0;
        }

        input();

        for (rule_id = 1; rule_id <= MAXRULE; rule_id++) {
            initialize();

            while (!done) {
                find_schedule_next_op();
                update();
            }
            report();
        }
        fprintf(mft_ptr, "\n");
        fprintf(mtd_ptr, "\n");

        free(s_head);
        free(next_op);
        for (i = 1; i <= 100; i++)
            free(m_head[i]);
    }
    fclose(inPtr);
    fclose(mtd_ptr);
    fclose(mft_ptr);

    return 0;
}   /* rule_output() */

void input()
{
    int i = 0;

    num_of_op = 0;

    for (i = 1; i <= no_job; i++) {
        job[i] = mas_job[i];
        num_of_op += job[i].no_op;
    }
}   /* input() */

void initialize()
{
    int i = 0;
    JobPtr tmpJP;
    SlotPtr tmpSP;

    done = FALSE;
    counter = 0;

    s_head = NULL;
    s_head = malloc(sizeof(JOBINFO));
    s_head->next = NULL;

    for (i = 1; i <= no_mac; i++) {
        tmpSP = malloc(sizeof(SLOTINFO));

        tmpSP->sta = 0;
        tmpSP->fin = MAXINT;
        tmpSP->next = NULL;
        m_head[i] = tmpSP;
    }
    for (i = 1; i <= no_job; i++) {
        tmpJP = (JobPtr)malloc(sizeof(JOBINFO));

        tmpJP->job = i;
        tmpJP->op  = job[i].p_plan[1][1];
        tmpJP->mac = job[i].p_plan[1][2];
        tmpJP->dur = job[i].p_plan[1][3];
        tmpJP->ok  = FALSE;
        tmpJP->es  = job[i].r_time + 1;
        tmpJP->next = s_head->next;
        s_head->next = tmpJP;
    }
}

void find_schedule_next_op()
{
    int i;
    int e_sta, total, work_rem, control;
    float h_pri;
    JobPtr tmpJP = NULL;

    e_sta = MAXINT;
    tmpJP = s_head;

    total = 0;

    while (tmpJP->next != NULL) {
        tmpJP = tmpJP->next;

        if (tmpJP->es <= e_sta && tmpJP->ok == FALSE)
            e_sta = tmpJP->es;
    }

    tmpJP = s_head;

    switch (rule_id) {

    case 1:     /* SPT */

        while (tmpJP->next != NULL) {
            tmpJP = tmpJP->next;

            if (tmpJP->es == e_sta && tmpJP->ok == FALSE)
                job[tmpJP->job].priority = tmpJP->dur;
        }
        break;

    case 2:     /* LPT */

        while (tmpJP->next != NULL) {
            tmpJP = tmpJP->next;

            if (tmpJP->es == e_sta && tmpJP->ok == FALSE)
                job[tmpJP->job].priority = 1.0 / (float)tmpJP->dur;
        }
        break;

    case 3:     /* FIFO */

        while (tmpJP->next != NULL) {
            tmpJP = tmpJP->next;

            if (tmpJP->es == e_sta && tmpJP->ok == FALSE)
                job[tmpJP->job].priority = job[tmpJP->job].r_time;
        }
        break;

    case 4:     /* EDD */

        while (tmpJP->next != NULL) {
            tmpJP = tmpJP->next;

            if (tmpJP->es == e_sta && tmpJP->ok == FALSE)
                job[tmpJP->job].priority = job[tmpJP->job].d_date;
        }
        break;

    case 5:     /* LWR - least work remaining */

        while (tmpJP->next != NULL) {
            tmpJP = tmpJP->next;

            if (tmpJP->es == e_sta && tmpJP->ok == FALSE) {
                total = 0;

                for (i = tmpJP->op; i <= job[tmpJP->job].no_op; i++)
                    total += job[tmpJP->job].p_plan[i][3];
                job[tmpJP->job].priority = total;
            }
        }
        break;

    case 6:     /* DSLACK */
    case 7:     /* SLACK/RO */
    case 8:     /* CR */

        while (tmpJP->next != NULL) {
            tmpJP = tmpJP->next;

            if (tmpJP->es == e_sta && tmpJP->ok == FALSE) {
                work_rem = 0;
                total = job[tmpJP->job].d_date - tmpJP->es;

                for (i = tmpJP->op; i <= job[tmpJP->job].no_op; i++)
                    work_rem += job[tmpJP->job].p_plan[i][3];

                if (rule_id == 6)
                    job[tmpJP->job].priority = total - work_rem;

                if (rule_id == 7)
                    job[tmpJP->job].priority = (float)(total - work_rem) /
                        (float)(job[tmpJP->job].no_op - tmpJP->op + 1);

                if (rule_id == 8) {
                    if (total != 0)
                        job[tmpJP->job].priority =
                            (float)total / (float)work_rem;
                    else
                        job[tmpJP->job].priority = 0;
                }
            }
        }
        break;

    case 9:     /* LNOR - least number of operations remaining */

        while (tmpJP->next != NULL) {
            tmpJP = tmpJP->next;

            if (tmpJP->es == e_sta && tmpJP->ok == FALSE)
                job[tmpJP->job].priority =
                    job[tmpJP->job].no_op - tmpJP->op + 1;
        }
        break;
    }

    h_pri = MAXINT;
    tmpJP = s_head;

    while (tmpJP->next != NULL) {
        tmpJP = tmpJP->next;

        if (tmpJP->es == e_sta && tmpJP->ok == FALSE &&
            job[tmpJP->job].priority < h_pri) {
            next_op = tmpJP;
            h_pri = job[tmpJP->job].priority;
        }
    }

    tmpJP = s_head;
    control = FALSE;

    while (tmpJP->next != NULL && control == FALSE) {
        tmpJP = tmpJP->next;

        if (tmpJP == next_op) {
            tmpJP->ok = TRUE;
            control = TRUE;

            if (tmpJP->op == job[tmpJP->job].no_op) {
                job[tmpJP->job].f_time = tmpJP->es + tmpJP->dur - 1;
                job[tmpJP->job].tard = (tmpJP->es + tmpJP->dur) -
                                       job[tmpJP->job].d_date;
                if (job[tmpJP->job].tard < 0)
                    job[tmpJP->job].tard = 0;
            }
        }
    }
}

void update()
{
    SlotPtr slot, tmpSP;
    JobPtr operation, tmpJP;

    if (counter == num_of_op)
        done = TRUE;
    if (!done) {
        slot = (SlotPtr)malloc(sizeof(SLOTINFO));

        slot->sta = next_op->es + next_op->dur;
        tmpSP = m_head[next_op->mac];

        while (tmpSP->sta > next_op->es)
            tmpSP = tmpSP->next;

        while (!((tmpSP->sta + next_op->dur <= tmpSP->fin) &&
                 (next_op->es + next_op->dur <= tmpSP->fin)))
            tmpSP = tmpSP->next;

        slot->fin = tmpSP->fin;
        slot->next = tmpSP->next;
        tmpSP->fin = next_op->es;
        tmpSP->next = slot;

        if (next_op->op != job[next_op->job].no_op) {
            operation = malloc(sizeof(JOBINFO));

            operation->job = next_op->job;
            operation->op  = next_op->op + 1;
            operation->ok  = FALSE;
            operation->mac = job[operation->job].p_plan[operation->op][2];
            operation->dur = job[operation->job].p_plan[operation->op][3];

            operation->es = next_op->es + next_op->dur;
            operation->next = s_head->next;
            s_head->next = operation;
        }

        tmpJP = s_head;
        while (tmpJP->next != NULL) {
            tmpJP = tmpJP->next;

            tmpSP = m_head[tmpJP->mac];

            while (!(tmpSP->sta <= tmpJP->es))
                tmpSP = tmpSP->next;

            while ((tmpSP->sta + tmpJP->dur > tmpSP->fin) ||
                   (tmpJP->es + tmpJP->dur > tmpSP->fin))
                tmpSP = tmpSP->next;

void report()
{
    int i;
    int max_time, min_time;
    int lv_tot_td = 0;
    int lv_tot_ft = 0;
    float lv_mft = 0;
    float lv_mtd = 0;

    max_time = 0;
    min_time = 32767;

    for (i = 1; i <= no_job; i++) {
        if (max_time < job[i].f_time)
            max_time = job[i].f_time;
        if (min_time > job[i].r_time)
            min_time = job[i].r_time;

        lv_tot_td += job[i].tard;
    }

    lv_tot_ft = max_time - min_time;

    lv_mft = (float)lv_tot_ft / (float)no_job;
    lv_mtd = (float)lv_tot_td / (float)no_job;

    fprintf(mft_ptr, "%-10.5f ", lv_mft);

    fprintf(mtd_ptr, "%-10.5f ", lv_mtd);
}   /* report() */

Appendix B

Sample Test Data for Single-machine Scheduling Problem

Queue Size = 10, Current Time = 5216, Preceding Job Type = 4

Job Number   Job Type   Arrival Time   Process Time   Due Date
     1           1          5139             4           5212
     2           6          5142             8           5251
     3           3          5152             5           5232
     4           4          5157             3           5229
     5           4          5169             3           5240
     6           5          5175            10           5289
     7           7          5180            15           5368
     8           1          5195             4           5271
     9           4          5209             3           5281
    10           5          5215            10           5321

Queue Size = 10, Current Time = 2055, Preceding Job Type = 7

Job Number   Job Type   Arrival Time   Process Time   Due Date
     1           7          1958            15           2143
     2           1          2002             4           2072
     3           4          2006             3           2078
     4           2          2010             6           2081
     5           7          2015            15           2204
     6           3          2017             5           2101
     7           1          2028             4           2099
     8           5          2032            10           2140
     9           4          2047             3           2119
    10           6          2048             8           2164

Queue Size = 10, Current Time = 1280, Preceding Job Type = 5

Job Number   Job Type   Arrival Time   Process Time   Due Date
     1           7          1157            15           1337

Queue Size = 10, Current Time = 5506, Preceding Job Type = 1

Job Number   Job Type   Arrival Time   Process Time   Due Date
     1           5          5442            10           5554
     2           4          5446             3           5518
     3           4          5461             3           5532
     4           6          5462             8           5578
     5           4          5465             3           5536
     6           1          5468             4           5534
     7           3          5480             5           5561
     8           1          5491             4           5570
     9           5          5498            10           5613
    10           4          5505             3           5573

Queue Size = 10, Current Time = 4077, Preceding Job Type = 1

Job Number   Job Type   Arrival Time   Process Time   Due Date
     1           7          4001            15           4190

Queue Size = 10, Current Time = 7669, Preceding Job Type = 3

Job Number   Job Type   Arrival Time   Process Time   Due Date
     1           6          7644             8           7765
     2           4          7646             3           7718
     3           4          7649             3           7719
     4           7          7650            15           7831
     5           1          7652             4           7727
     6           6          7653             8           7776
     7           6          7653             8           7762
     8           4          7667             3           7736
     9           6          7667             8           7784
    10           3          7668             5           7751

Queue Size = 10, Current Time = 6596, Preceding Job Type = 5

Job Number   Job Type   Arrival Time   Process Time   Due Date
     1           2          6526             6           6602
     2           6          6532             8           6647
     3           1          6539             4           6615
     4           3          6542             5           6626
     5           7          6543            15           6729
     6           4          6547             3           6617
     7           4          6561             3           6632
     8           6          6569             8           6682
     9           2          6579             6           6673
    10           1          6587             4           6662

Queue Size = 10, Current Time = 6095, Preceding Job Type = 5

Job Number   Job Type   Arrival Time   Process Time   Due Date
     1           7          5946            15           6130
     2           3          6038             5           6121
     3           2          6042             6           6126
     4           1          6046             4           6123
     5           4          6058             3           6130
     6           1          6073             4           6149
     7           3          6084             5           6166
     8           4          6088             3           6158
     9           4          6091             3           6162
    10           2          6094             6           6174

Appendix C

Sample Test Data for Multiple-machine Scheduling Problem

Number of Jobs: 18

[Table: for each job, its attributes (number of operations, ready time, due
date) and, per operation, the operation number, process time, and machine
number. The individual entries are not legible in this reproduction; the
legible ready times are all 0, and the legible due dates range from 43 to
154.]

Number of Jobs: 22

[Table: same layout as above. The legible ready times are all 0, and the
legible due dates range from 51 to 153.]