
OPERATION ASSIGNMENT WITH BOARD SPLITTING AND MULTIPLE

MACHINES IN ASSEMBLY

by

SAKCHAI RAKKARN

Submitted in partial fulfillment of the requirements

For the degree of Doctor of Philosophy

Dissertation Adviser: Dr. Vira Chankong

Department of Electrical Engineering and Computer Science

CASE WESTERN RESERVE UNIVERSITY

May, 2008

CASE WESTERN RESERVE UNIVERSITY

SCHOOL OF GRADUATE STUDIES

We hereby approve the thesis/dissertation of

______

candidate for the ______degree *.

(signed)______(chair of the committee)

______

______

______

______

______

(date) ______

*We also certify that written approval has been obtained for any proprietary material contained therein.

Table of Contents

Table of Contents…………………………………………………………………………..i

List of Figures……………………………………………………………………………..v

List of Tables…………………………………………………………………………….vii

Acknowledgements………………………………………………………………………..x

Abstract…………………………………………………………………………………...xi

1 Introduction……………………………………………………………………………...1

1.1 The Overview of Printed Circuit Board Assembly…………………………....1

1.2 Planning and Process for PCB Assembly……………………………………..2

1.3 Problem Statement and Rationale...…………………………………………...6

1.4 Research Objective……………………………………………………………8

1.5 Outline of the Thesis…………………………………………………………..8

2 Literature Review…………………………………………………………………...….10

2.1 Models for Generalized Operation Assignment for PCB Assembly

Problems…………………………………………………………………10

2.2 Generic Problems with Similar Model Structure……….…………………....13

Generalized Assignment Problem (GAP)………………………….…….13

Uncapacitated Facility Location Problem (UFLP)……………………15

2.3 Solution Strategies and Methods for Combinatorial Optimization……...…...17

Binary Integer Programming (BIP)……………………...………………17

Branch-and-Bound……………………………………………………….18

Knapsack Problem……………………………………………………….20

Decomposition and Duality……………………………………………...21


Lagrangian Relaxation and Subgradient Method (LR+S)……….………24

Linear Programming……………………………………………….…….28

Heuristics…………………………………………………………….…..29

2.4 Existing Algorithms for Operation Assignment of PCB Assembly…………31

Greedy Board Heuristics for Single Automatic Machine…………………32

Greedy Board Algorithm with Multiple Automatic Machines…………..34

Stingy Component heuristics for Single Automatic Machine…………...36

Stingy Component Algorithm with Multiple Automatic Machines……..38

Lagrangian Relaxation Heuristic with Single Machine………………….39

Lagrangian Relaxation Heuristic with No Board Splitting

and Multiple Machines…………………………………………………..44

3 Solution Algorithms for Multiple Machines with Board Splitting…………………….47

3.1 The Model Revisited…………………………………………………………..47

Commonality Ratio and Problem Size…………………………………...52

3.2 The Proposed Solution Strategy…..………………………………………….53

3.3 Finding Multipliers………………………..………………………...……….60

LP Relaxation LPr………………………………………………………..61

Lagrangian Relaxation LR……………………………………………….62

3.4 The Final Step: Searching for the Primal Solution….……………………….69

Lower Bound Maintaining Algorithm (LBM)…………..……………….69

LBM Heuristics + Greedy Board ……………………..…………………72

LBM Heuristics + Greedy Component ………………….....……………74

Problem Space Search Method…………………………………………..75


3.5 Implementing the LBM Algorithm……………….………….………………77

3.6 Computation Complexity…………………………………………………….79

4 Test Problems and Computation Results………………………………………………82

4.1 Test Problems...………………………………………………………………82

4.2 Computational Results……………………………………………………….85

4.2.1 Performance Tests………………………………………………….85

Single Machine Test……………………………………………..87

Multiple Unidentical Machines…………………………………..88

Multiple Identical Machines……………………………………………..92

Multiple Machines with Unidentical/Identical Machines………..95

More Results: LBM vs. CPLEX for Identical Machines………….98

4.2.2 Sensitivity and Robustness Tests…………………………….…...100

Results and Analysis for Unidentical Multiple Machines……...101

Results and Analysis for Identical Multiple Machines….……...107

Results and Analysis for Unidentical/Identical

Multiple Machines…………………………………………..….113

5 Actual Case Study and Results…………………………………………………….…120

5.1 Introduction to C.Y. Tech Co., Ltd…………………………………………120

5.2 Prepared Data and Information……………………………………………..124

Demand Data and Product Description…………………………………124

Process Information………………………………………………...... 127

Pressing and Setup Time Data………………………………………….129

5.3 Existing Planning Method………………………………………………….130


5.4 Results and Performance……………………………………………………131

5.5 Final Comments…………………………………………………………….134

6 Conclusions and Future Work…………………………………………………….….136

6.1 Conclusions…………………………………………………………………136

The Problem Addressed……………………………..………………….136

The Solution Strategy Used…………………………………………….136

Testing the Claims……………………………………………………...140

6.2 Future Work………………………………………………………………….145

Bibliography……………………………………………………………………….…...146


List of Figures

Figure 1.1: Auto Insertion Technology for PCBs………………………………………...2

Figure 1.2: Overall Production Planning of PCB: Decision/Information relationships….3

Figure 1.3: Relationships between three decision phases in PCB Assembly……………4

Figure 1.4 Typical Assembly Process for PCBs………………………………………….5

Figure 3.1 Conception of LBM-Based Feasible Solution Finder…………………..……72

Figure 3.2 Flow Process for LBM algorithm………………………………………...…..79

Figure 4.1 Average CPU Time between LBM and CPLEX for Unidentical Processes....89

Figure 4.2 Average CPU Time between LBM and CPLEX for Identical Processes…....92

Figure 4.3 Average CPU Time between LBM and CPLEX

for Unidentical/Identical Processes…..………………………..…..…..……..95

Figure 4.4 Average Duality Gap between LBM and GRD for Problem Type A………102

Figure 4.5 Average Duality Gap between LBM and GRD for Problem Type B………103

Figure 4.6 Average Duality Gap between LBM and GRD for Problem Type C………104

Figure 4.7 Average Duality Gap between LBM and GRD for Problem Type D………105

Figure 4.8 Comparing Average Duality Gap between

Problem Types and Sized Problem……..…………………..……………….106

Figure 4.9 Average Duality Gap between LBM and GRD for Problem Type A………108

Figure 4.10 Average Duality Gap between LBM and GRD for Problem Type B…..…109

Figure 4.11 Average Duality Gap between LBM and GRD for Problem Type C…..…110

Figure 4.12 Average Duality Gap between LBM and GRD for Problem Type D……..111

Figure 4.13 Comparing Average Duality Gap

between Problem Types and Sized Problem………………………………112


Figure 4.14 Average Duality Gap between LBM and GRD for Problem Type A……..114

Figure 4.15 Average Duality Gap between LBM and GRD for Problem Type B…..…115

Figure 4.16 Average Duality Gap between LBM and GRD for Problem Type C…...…116

Figure 4.17 Average Duality Gap between LBM and GRD for Problem Type D..……117

Figure 4.18 Comparing Average Duality Gap

between Problem Types and Sized Problem………………………….……118

Figure 5.1 Business Flow Chart of C.Y. Tech Co., Ltd………………………………...122

Figure 5.2 Process Flow Chart of Auto Insertion Technology……………………...….123

Figure 5.3 Layout of Printed Circuit Board Model RVD-164………………………….124

Figure 5.4 Axial Inserter 6292 VCD-DH6 Dual Head by Universal…………….……..127

Figure 5.5 Radial Inserter VC-5B by TDK……………………………………………..128

Figure 5.6 Axial Sequencer Machine by Universal…………………………………….128

Figure 5.7 Time Comparison of the Four Algorithms………………………………….132

Figure 5.8 Total Production Time: LBM vs. Existing Method……………...…………133


List of Tables

Table 3.1: The Procedure of LBM Algorithm………………………………………..….78

Table 4.1: Characteristics of Test Problem Designs……………………………………..84

Table 4.2 Results: Single Machine with identical processing and set up times…………87

Table 4.3 Results: Single Machine with unidentical processing and set up times………87

Table 4.4 Results: Problem Size 3×20×5…………………………………………….…..89

Table 4.5 Results: Problem Size 3×100×30………………………………………….…..90

Table 4.6 Results: Problem Size 4×100×30………………………………………….…..90

Table 4.7 Results: Problem Size 5×100×30……………………………………………...90

Table 4.8 Results: Problem Size 3×1000×100……………………………………….…..91

Table 4.9 Results: Problem Size 4×1000×100……………………………………….…..91

Table 4.10 Results: Problem Size 5×1000×100………………………………………….91

Table 4.11 Results: Problem Size 3×20×5……………………………………………….92

Table 4.12 Results: Problem Size 3×100×30……………………………………………93

Table 4.13 Results: Problem Size 4×100×30…………………………………………….93

Table 4.14 Results: Problem Size 5×100×30……………………………………………93

Table 4.15 Results: Problem Size 3×1000×100…………………………………….……94

Table 4.16 Results: Problem Size 4×1000×100…………………………………….……94

Table 4.17 Results: Problem Size 5×1000×100……………………………………….…94

Table 4.18 Results: Problem Size 3×20×5……………………………………………….96

Table 4.19 Results: Problem Size 3×100×30…………………………………………….96

Table 4.20 Results: Problem Size 4×100×30…………………………………………….96


Table 4.21 Results: Problem Size 5×100×30…………………………………………….97

Table 4.22 Results: Problem Size 3×1000×100………………………………………….97

Table 4.23 Results: Problem Size 4×1000×100…………………………………………97

Table 4.24 Results: Problem Size 5×1000×100………………………………………….98

Table 4.25 Results: Problem Size 5×1000×100……………………………………….…99

Table 4.26 Results: Problem Size 4×1000×100………………………………………….99

Table 4.27 Results: Problem Size 3×1000×100……………………………….…..……100

Table 4.28 Duality Gap between LBM and GRD for Problem Type A…………….….101

Table 4.29 Duality Gap between LBM and GRD for Problem Type B………………..102

Table 4.30 Duality Gap between LBM and GRD for Problem Type C………………..103

Table 4.31 Duality Gap between LBM and GRD for Problem Type D………………..104

Table 4.32 Average Duality Gap for All Problem Types

with Different Sized Problem…….………………………………………...105

Table 4.33 Duality Gap between LBM and GRD for Problem Type A………………..107

Table 4.34 Duality Gap between LBM and GRD for Problem Type B………………..108

Table 4.35 Duality Gap between LBM and GRD for Problem Type C……………..…109

Table 4.36 Duality Gap between LBM and GRD for Problem Type D……………..…110

Table 4.37 Average Duality Gap for All Problem Types

with Different Sized Problem………………………………………………111

Table 4.38 Duality Gap between LBM and GRD for Problem Type A…………..……113

Table 4.39 Duality Gap between LBM and GRD for Problem Type B……………..…114

Table 4.40 Duality Gap between LBM and GRD for Problem Type C……………..…115

Table 4.41 Duality Gap between LBM and GRD for Problem Type D………..………116


Table 4.42 Average Duality Gap for All Problem Types

with Different Sized Problem………………………………………....…..117

Table 5.1 Bill of Material of Printed Circuit Board Model RVD-164…………………125

Table 5.2 (Continued) Bill of Material of Printed Circuit Board Model RVD-164…..…126

Table 5.3 Average Processing and Setup Time of Insertion Process…………………..129

Table 5.4 Results of all Algorithms in Each Demand Period…………………………..131

Table 5.5 CPU Times of all Four Algorithms…………………………………….…….132

Table 5.6 Percent above Optimal for Three Algorithms…………………………….….132

Table 5.7 Percent Saving of Production Time between LBM and Existing Method…..133


Acknowledgements

Many thanks to my advisor, Prof. Vira Chankong. He has helped transform my idea of research from an obscure, magical process into a concrete and fascinating one. The topic area he provided and his direction in defining the scope of this work were invaluable.

My thanks to Prof. Kenneth A. Loparo, Prof. Narasingarao S. Sreenath, and Prof.

Kamlesh Mathur for their service as committee members and for the valuable time they spent reading this manuscript.

My deepest gratitude goes to Kasem Bundit University for providing full financial support for my Ph.D. in the Systems and Control program in Electrical Engineering and

Computer Science at Case Western Reserve University.

I gratefully acknowledge Jintana Panuwanakorn and Nukul Tantikul of C.Y. Tech Co., Ltd., Thailand, and Dr. Akajate Apikajornsin for their support and for providing the case-study data used in this dissertation.

There are many others whose support was invaluable: my wife, my parents, and my friends Dr. Danthai Thongphiew and Dr. Suparerk Janjarasjitt. Thanks to you and to all others whom I have not listed here.


Operation Assignment with Board Splitting and Multiple Machines in

Printed Circuit Board Assembly

Abstract

By

SAKCHAI RAKKARN

This research considers an operation assignment problem arising from the printed circuit board (PCB) assembly process. We focus on the case most prevalent in today's PCB industry, where multiple automatic insertion machines are available and a board may be set up on more than one machine. We aim to develop an efficient algorithm that can comfortably handle industrial-sized problems. A challenging problem is how to assign component types to machines, board types to machines, and a particular component on a board to a particular machine so as to minimize the total assembly time. The resulting binary integer program (BIP) has a unique structure with weakly coupling location-type constraints and capacity constraints. Aiming to develop a solution method that obtains high-quality solutions in an acceptable time, we exploit the particular structure of the BIP model to the fullest extent possible. Decomposition, relaxation (Lagrangian and LP), and a strategic neighborhood search consisting of greedy board/component heuristics, problem space search, and a newly developed variable-fixing heuristic are used to form the new method.

We test the performance of our proposed method by generating almost five hundred carefully designed test problems. We also use a real-world case study graciously provided by C.Y. Tech. CPLEX, Greedy Board heuristics, and a special heuristic used by C.Y. Tech are used to compare performance with the proposed method.

Test results consistently indicate that the proposed method is a strong candidate for use in the PCB assembly industry, providing the best compromise between solution quality and speed among all methods tested. For all test problems and the real case study, it produces optimal or near-optimal solutions, with percent above optimal or duality gap averaging 1.2% or less, and with computation times within a few hundred seconds. Its computation time increases linearly, but slowly, with problem size, indicating that it will comfortably handle problems much larger than the largest test problem. Finally, if desired, the proposed method can help C.Y. Tech appreciably increase its throughput without additional capital investment by cutting production time by 16%.


1 Introduction

1.1 Overview of Printed Circuit Board Assembly

In today's high-tech world, electronic devices and gadgets are everywhere, touching all our major life activities of working, learning, and living, and significantly affecting our quality of life. With the printed circuit board (PCB) as their backbone, the popularity and pervasiveness of such high-tech devices would not have been possible without the ability to produce high volumes of customized PCBs at the required speed and quality.

Printed circuit board assembly involves inserting electronic components into PCBs. The most important factor affecting the cost, efficiency, and quality of PCB assembly is component insertion, which is carried out either manually or by automatic or semiautomatic machines. Modern automatic insertion machines (illustrated in Figure 1.1) can generally do the job faster, with greater precision and reliability, and at lower per unit cost than the manual process. However, their capacity is usually limited, and insertion of nonstandard or special components normally has to be performed manually.

In high-volume production, a combination of automatic/semiautomatic and manual insertion processes is used. Again, the automatic/semiautomatic process handles standard components to achieve speed and efficiency, while the manual process handles the excess beyond the automatic process's capacity as well as specialized components.


Figure 1.1: Auto Insertion Technology for PCBs

1.2 Planning and Process for PCB Assembly

The process for printed circuit board assembly is part of the overall production planning process depicted in Figure 1.2. Essentially, the PCB circuits have to be designed along with the specification of Bills of Materials (BOM) and equipment requirements. This is followed by process planning, which consists of production planning, scheduling, and shop-floor control planning. The final step is operations planning, consisting of feeder arrangement, placement sequencing, NC programming, and the actual assembly.


Figure 1.2: Overall Production Planning of PCB: Decision/Information relationships [1]

Within the process planning of PCB assembly there are three main decision-making phases, grouping, allocation, and arrangement/sequencing, as shown in Figure 1.3. Grouping makes use of cellular manufacturing by selecting machine groups and part families and assigning part families to machine groups. Allocation assigns components to machines when the corresponding machine group has more than one machine. Finally, Arrangement and Sequencing arranges component feeders and sequences placement operations for each machine and printed circuit board. The hierarchical relationship between these three decisions shown in Figure 1.3 indicates tight coupling between the grouping and allocation decisions, especially when we have multiple boards and multiple machines. On the other hand, there is minimal coupling between the arrangement/sequencing decision and the other types of decisions because only individual boards and machines are dealt with at this level.


Figure 1.3: Relationships between three decision phases in PCB Assembly [2]

Figure 1.4 below illustrates a complete assembly process for PCBs [3] including component presentation, repair of faulty insertions, and touch-up for faulty soldering.

Boards move through the steps as indicated by the horizontal lines in the figure. The vertical lines indicate the flow of components from inventory to the insertion stations.

The second insertion station in the assembly process is VCD insertion. This refers to the automatic insertion of axial-lead components, also called VCD (variable center distance) components. The VCD station performs feeder sequencing before insertion. In addition, manual insertion may be employed to perform insertions of components with nonstandard shapes.


Figure 1.4 Typical Assembly Process for PCBs.

1.3 Problem Statement and Rationale

This research focuses specifically on the decision making on Allocation (assigning component types to various machine groups) and Arrangement/Sequencing (arranging component types on feeder slots and placement sequencing of components and boards for insertion operations on machines). These constitute the last two boxes in Figure 1.3. The emphasis will be placed on how to perform these assignment and sequencing tasks efficiently and with minimum cost, particularly for the cases where we have multiple board types with a large number of boards per board type, multiple automatic/semiautomatic machines, and a large number of overall component types. In these cases, existing techniques, which are mostly heuristic-based, often produce solutions that are far from optimal. We will seek a new assignment method that will help reduce the assignment costs as much as possible (if not to the optimum), thereby significantly increasing the efficiency and throughput of the PCB assembly process.

Detailed Description of the PCB Process and the Research Problem

Consider an insertion process associated with an operation assignment. A mix of board types requires insertion of a number of different components. The insertion processes used are automatic/semiautomatic and manual. A board can pass through either one or both of these processes. For the automatic/semiautomatic insertion process: (1) there may be one or more machines, which may be identical, unidentical, or mixed; (2) each automatic/semiautomatic insertion machine has a limited capacity in the number of different types of components it can handle; (3) a board can pass through one or more machines; and (4) for any automatic machine that requires component sequencing, the capacity of the sequencer serves as the capacity of the automated machine. The total cost of the PCB assembly process consists of the processing cost of inserting components on a board by a machine/process and the cost of setting up a board on a machine/process. The per unit processing and setup costs vary with the per unit processing time and setup time, respectively. Since automatic/semiautomatic processes/machines are faster (in both insertion and setup) than the manual process, they have lower processing and setup costs. However, each automated machine has a limited capacity in terms of the number of component types it can hold at one time, and there are also a limited number of such machines available. The manual process, on the other hand, can handle all component types and in any volume (although of course at slower speed and higher cost). The key questions at this phase of PCB assembly are which component type is to be assigned to which machine, which board is to be set up on which machine, and which component on a board is to be inserted by which machine so as to minimize the total cost. In the above, we allow a board to be split (i.e., some components on a board can be inserted by one machine, while the rest can be inserted by other machines). This type of problem can be formulated as a linear 0-1 program representing a special assignment problem, as will be seen in the next chapter. An example application is illustrated in [4], where the PCB assembly at Hewlett-Packard (HP) is used to demonstrate the problem and solution techniques. However, the cases considered in [4] have only one automatic machine, and a board cannot be split. In practice, the number of automatic machines used will be more than one, a board can be split if it is beneficial to do so, the number of board types may be in the hundreds, the number of component types may be in the thousands, the volume of boards required is large, and the number of common components on different board types (the commonality ratio) may be high. These are the situations in which existing methods, if any, often produce poor results. This is precisely why, in this research, we would like to find a more efficient method to solve the problem, one that can be used in real industrial settings.

1.4 Research Objective

The principal aim of this research is to develop an efficient algorithm to solve the operation assignment problem with board splitting and multiple machines in PCB assembly, with all the characteristics of real practical problems outlined above. This problem is motivated in part by a real PCB assembly company, C.Y. Tech Co., Ltd. (Thailand), which has a reasonably large volume of business serving Thailand and the East Asia region. The company currently uses three automatic insertion machines, two of which are identical, in addition to the manual process. We will therefore specifically investigate how we can help C.Y. Tech reduce its PCB assembly cost and increase its throughput.

1.5 Outline of the Thesis

In this chapter we have provided an overview of the process planning and production planning for the PCB industry in general, and the operations assignment of PCB assembly in particular. Various mathematical models pertinent to the research problem (such as generalized assignment problems and facility location problems), along with existing methods and strategies for solving them, will be described in Chapter 2. In Chapter 3, we will focus on the specific mathematical formulation of the research problem, explore its special structure, and use it to develop an efficient solution method by resourcefully combining partitioning/decomposition, Lagrangian relaxation, greedy board/component heuristics, and a problem space search strategy. Also described in Chapter 3 are the actual implementation of the proposed algorithm and a brief discussion of computational complexity. In Chapter 4, a variety of randomly generated test problems is used to test the performance (percent duality gap or percent above optimal, and computational time) of the proposed algorithm. In Chapter 5, we apply our algorithm to the real case study at C.Y. Tech Co., Ltd. and show how much we can reduce the PCB production cost at C.Y. Tech compared to what they are doing now. In Chapter 6, we conclude with a summary of the proposed algorithm for operation assignment of PCB assembly, discuss its benefits and limitations, and suggest issues for future research.

2 Literature Review

In this chapter, related models and their solution strategies are reviewed in order to learn what exists and what ideas are available that may be useful in solving our specific problem in PCB assembly. To begin, it is prudent to state the mathematical model for PCB assembly for the most general case, observe its special structure, state generic classes of problems with similar structure, and briefly discuss popular solution strategies for solving those generic classes of problems.

2.1 Models for Generalized Operation Assignment for PCB Assembly Problems

The earliest PCB assembly problem [5-6] is concerned with placement sequencing and feeder configuration. The problem is formulated as (1) a Quadratic

Assignment Problem (QAP) and solved by using Metaheuristics and (2) a production planning and scheduling formulation to determine the component-machine allocations as well as PCB sequence. In the latter, two types of objective functions are used separately: one minimizing the number of changeovers (hence minimizing set up time) and the other minimizing processing time.

A difficult issue that often arises in PCB assembly is how circuit boards should be grouped for manufacturing to minimize the total assembly time, which is a combination of setup time and processing time under many problem characteristics. An overall approach for addressing the board grouping problem is given in [7-11]. Models and algorithms for solving PCB assembly to determine an optimal operation assignment of board types to machine groups, allocation of component feeders to individual machines, and production sequences are described in [4, 11-16]. Most of these works assume a single automatic/semiautomatic machine. The treatment of PCB assembly with multiple machines first appeared in [4, 7].

Choose the component and board assignments so as to

Minimize the sum of processing cost/time and setup cost/time in PCB assembly

Subject to

a) Only components required on a given board get inserted on that board and

each component inserted is done once and only once

b) A component on a given board gets inserted by a machine only if the board is

set up on that machine

c) A component on a given board gets inserted by a machine only if the machine

is set up to handle that component

d) The number of component types to be set up on an automatic machine cannot

exceed its stated capacity.

First we define the decision variables and parameters

For machine $i \in I$, component type $j \in J$, and board type $k \in K$, define:

Decision variables:

$x_{ijk} = 1$ if component type $j$ on board type $k$ is assigned to be inserted by machine $i$, and $0$ otherwise

$y_{ik} = 1$ if board type $k$ is set up on machine $i$, and $0$ otherwise

$z_{ij} = 1$ if component type $j$ is set up on machine $i$, and $0$ otherwise

Parameters:

$r_{jk} = 1$ if component type $j$ is required on board type $k$, and $0$ otherwise

$N_i$ = maximum number of component types that can be set up on machine $i$

$c_{ij}$ = per unit cost to insert component type $j$ by machine $i$

$v_{jk}$ = total number of components of type $j$ required for board type $k$

$s_{ik}$ = per unit cost to set up board type $k$ on machine $i$

$d_k$ = number of boards of type $k$ produced (demand) during the planning horizon

Then the corresponding mathematical model for the PCB assembly problem is:

Minimize $\sum_{i}\sum_{j}\sum_{k} c_{ij} v_{jk} x_{ijk} + \sum_{i}\sum_{k} s_{ik} d_k y_{ik}$   (1)

subject to

$\sum_{i} x_{ijk} = r_{jk} \quad \forall j, k$   (2)

$y_{ik} \ge x_{ijk} \quad \forall i, j, k$   (3)

$z_{ij} \ge x_{ijk} \quad \forall i, j, k$   (4)

$\sum_{j} z_{ij} \le N_i, \quad i = 1, 2, \ldots, |I| - 1$   (5)

$x_{ijk} \in \{0,1\} \quad \forall i, j, k$   (6)

$y_{ik} \in \{0,1\} \quad \forall i, k$   (7)

$z_{ij} \in \{0,1\} \quad \forall i, j$   (8)

Equations/inequalities (2) through (5) capture the stated requirements (a) to (d), while (6)-(8) express the YES-NO decision nature of the three groups of variables involved. The objective function (1) to be minimized is the total cost/time to complete a PCB assembly job in the planning horizon. As an alternative objective function, another common metric to be minimized is the makespan. Minimizing makespan results in a more balanced workload on each machine. This in turn leads to a loosening of the bottleneck and hence an increase in throughput. Previous models and algorithms dealing with the minimization of makespan can be found in [2, 17-20]. In this work, we will focus on minimizing the total cost/time objective and not on minimizing makespan.
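To make the formulation concrete, the following sketch builds model (1)-(8) for a tiny made-up instance using the open-source PuLP modeling library and its bundled CBC solver; the library choice, the instance data, and all parameter values here are illustrative assumptions, not part of the thesis.

```python
# Minimal sketch of model (1)-(8) using PuLP; data are illustrative only.
import pulp

I = [0, 1, 2]            # machines; the last index is the manual process
J = list(range(4))       # component types
K = list(range(3))       # board types

# Illustrative parameters (not from the thesis):
r = {(j, k): 1 for j in J for k in K}          # r_jk: component j required on board k
v = {(j, k): 2 for j in J for k in K}          # v_jk: number of insertions of j on board k
d = {k: 100 for k in K}                        # d_k: demand of board type k
c = {(i, j): 1 + i for i in I for j in J}      # c_ij: per-unit insertion cost (manual slowest)
s = {(i, k): 5 + 5 * i for i in I for k in K}  # s_ik: per-unit setup cost
N = {0: 3, 1: 3}                               # N_i: capacity of the automatic machines only

m = pulp.LpProblem("pcb_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (I, J, K), cat="Binary")
y = pulp.LpVariable.dicts("y", (I, K), cat="Binary")
z = pulp.LpVariable.dicts("z", (I, J), cat="Binary")

# Objective (1): processing cost plus setup cost
m += (pulp.lpSum(c[i, j] * v[j, k] * x[i][j][k] for i in I for j in J for k in K)
      + pulp.lpSum(s[i, k] * d[k] * y[i][k] for i in I for k in K))

for j in J:
    for k in K:
        m += pulp.lpSum(x[i][j][k] for i in I) == r[j, k]        # (2)
for i in I:
    for j in J:
        for k in K:
            m += y[i][k] >= x[i][j][k]                           # (3)
            m += z[i][j] >= x[i][j][k]                           # (4)
for i in N:                                                       # (5): automatic machines only
    m += pulp.lpSum(z[i][j] for j in J) <= N[i]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print("total cost:", pulp.value(m.objective))
```

For industrial-sized instances one would of course hand the same formulation to a stronger solver such as CPLEX, which is the reference solver used in the computational tests later in this thesis.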

2.2 Generic Problems with Similar Model Structure

The model for PCB assembly (1)-(8) is a special form of assignment problem. It is of interest here then to look at those generic problem formulations that have similar structures to our model so that we may learn what useful solution strategies are available for possible adaptation. These generic problems include generalized assignment problems, uncapacitated facility location problems, and capacitated facility location problems. Generalized assignment problems have the structure most similar to the PCB assembly model in (1)-(8).

Generalized Assignment Problem (GAP)

A generalized assignment problem is a classical combinatorial optimization problem in which we try to assign n tasks to m agents so that the total assignment cost is minimized, each task is assigned to exactly one agent, and the capacity of each agent is not exceeded. It has a variety of real-world applications including facility location, loading, scheduling, routing, allocation, machine assignment, and supply chains.

The problem has been studied since the late 1960s. It is known to be NP-hard, but computer codes based on various heuristics and solution strategies for practical applications have been introduced since the early 1970s.

GAP can be formulated as a 0-1 integer linear program (ILP). Let n be the number of tasks to be assigned to m agents (n ≥ m) and define N = {1, 2, 3, …, n}. We define the requisite data elements as follows:

$c_{ij}$ = cost of task $j$ being assigned to agent $i$

$r_{ij}$ = amount of resource from agent $i$ required to perform task $j$

$b_i$ = amount of resource available to agent $i$

The decision variables are defined as:

$x_{ij} = 1$ if task $j$ is assigned to agent $i$, and $0$ otherwise

The 0-1 ILP model may then be written as:

GAP: minimize $\sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij} x_{ij}$   (9)

subject to: $\sum_{j=1}^{n} r_{ij} x_{ij} \le b_i \quad \forall i$   (10)

$\sum_{i=1}^{m} x_{ij} = 1 \quad \forall j \in N$   (11)

$x_{ij} = 0 \text{ or } 1 \quad \forall i, j$   (12)

The objective function (9) is the total cost of the assignments. Constraint (10) enforces the resource limitation for each agent. Constraint (11) ensures that each job is assigned to exactly one agent, which is similar to Constraint (2) in the PCB assembly model (although in the latter some of the tasks (components) need not be assigned to a board-machine "agent" combination). Publications [21-25] proposed algorithms to solve GAP based on Lagrangian Relaxation (LR), branch-and-bound (based on LR, LP relaxation, and valid cuts), problem space search, and column generation. [26-28] proposed a tabu search algorithm and a path relinking approach for solving GAP. There are many variants and extensions of GAP to fit a variety of real-world applications. [29] provided an extensive discussion on the following variants of models and algorithms:

• 3D Bottleneck Assignment Problem and its Variants

• Bi-criteria Assignment Problem

• Makespan minimizing GAP

• Time minimization assignment problem and a Lexicographic-search algorithm

• Two-stage time minimizing assignment problem

• Bi-level time minimizing assignment problem

Uncapacitated Facility Location Problem (UFLP)

The problem is to find locations for up to m warehouses so that the demand of n customers can be satisfied from these warehouses at minimum total cost. As described in [30-32], facility location problems can be either uncapacitated or capacitated. The most basic problem is the uncapacitated facility location problem, which can be formulated as follows:

Minimize $\sum_{i \in I} f_i z_i + \sum_{i \in I}\sum_{j \in J} c_{ij} x_{ij}$   (13)

subject to

$\sum_{i \in I} x_{ij} = 1; \quad j \in J$   (14)

$z_i \ge x_{ij}; \quad i \in I,\ j \in J$   (15)

$z_i \in \{0,1\}; \quad i \in I$   (16)

$x_{ij} \ge 0; \quad i \in I,\ j \in J$   (17)

where

xij = fraction of demand of customer j shipped from facility at location i

zi = 1, if a facility is built at location i, and 0, otherwise.

fi = fixed cost if a facility is built at location i

cij = cost of transporting the entire demand of customer j from facility i

I = {1,…,m}---the set of possible locations to build facilities,

J = {1,…, n}---the set of customers,

The objective function (13) is to minimize the total cost, which is composed of the fixed construction cost and the variable transportation cost. Constraints (14) and (17) ensure that the demand for each customer is met from the facilities built. Constraint (15) makes certain that shipment to customers cannot be made from a location where a warehouse has not been built. Finally, constraint (16) represents the YES-NO decision whether or not to build a warehouse at location i. For a capacitated version, there are additional restrictions reflecting the capacity of the warehouse built at each location. That is, additional constraints of the form

$\sum_{j=1}^{n} x_{ij} \le b_i \quad \text{for } i = 1, \ldots, m$   (18)

express the maximum shipment that can be made from a location i. In addition, there may be upper bounds $x_{ij} \le u_{ij}$, where $0 < u_{ij} \le 1$, indicating the maximum shipment that can be made from the warehouse at location i to customer j.

The similarity in structure between the PCB assembly problem and the UFLP lies in the objective functions, (1) vs. (13), and the do-only-if-built constraints, (3)-(4) vs. (15). The major difference between the two problems is that the $x_{ij}$ in the UFLP are continuous, whereas the $x_{ijk}$ and all other variables in the PCB assembly problem are binary.

2.3 Solution Strategies and Methods for Combinatorial Optimization

What methods are available to solve general ILPs, binary IPs, GAP, UFLP, and some other basic combinatorial optimization problems? Which of those techniques can we use or modify to solve the operation assignment problem in PCB assembly? These questions will be briefly reviewed next.

Binary Integer Programs (BIP)

Because the model (1)-(8) considered in this thesis is a special form of BIP, we look into the vast literature on solution strategies for BIP. BIP in the most general form is a combinatorial optimization problem that is often NP-hard. Easy BIPs are those whose solutions lie exactly on a vertex (or near the vertex) that happens to be an optimal solution of its relaxed LP. So solving its relaxed LP, which is considered “easy” with today’s algorithms for solving LPs, is equivalent to solving the BIP.

Solving hard BIPs to optimality, if possible, can be done through some form of implicit enumeration and/or cuts. Implicit enumeration is a strategy that allows all possible solutions to be considered (to guarantee optimality) without actually investigating individual ones explicitly (to improve its efficiency). The idea is to repeatedly and systematically identify a "bunch" of solutions that can be safely thrown away until an optimal solution is found, or the remaining solution set is small enough that a complete enumeration can be carried out quickly.

Branch-and-Bound

Branch-and-Bound is a standard and common strategy to perform implicit enumeration. Its success depends critically on how fast a good upper bound and a good lower bound of the optimal objective value can be generated. The faster they can be found, the more inferior (non-optimal) solutions can be thrown away. Two facts are useful in estimating upper bounds. First, finding a good upper bound is equivalent to finding a good feasible solution (of the original BIP). Second, as mentioned above, an integer solution (binary in our case) of an integer program (IP) can be discovered quickly if it lies at an optimal vertex of its relaxed LP. Thus finding an improved upper bound translates into a systematic reduction of the feasible set of the BIP so that a good feasible integer solution (of the original IP or BIP) is a solution of its relaxed LP. In the traditional

Branch-and-Bound, this systematic reduction of the feasible set is accomplished by selecting (branching) variables, setting them at some feasible integer values (e.g. 0 or 1 in our case), and then solving the relaxed LP in terms of the remaining variables. If at least one of the solutions is integer and if the relaxed optimal value is lower than the incumbent upper bound, then a new (incumbent) upper bound is found. If none of the solutions is integer, then the relaxed LP objective value is stored as a possible candidate for a new improved lower bound. The efficiency of the Branch-and-Bound strategy also depends critically on the branching scheme and the bookkeeping scheme used. These are necessary to ensure that no thrown-away portions of the feasible set can possibly contain the optimal solution, and that no part of the feasible set is accidentally left out.

The traditional way of reducing the feasible set by selecting and fixing branching variables is still inadequate for tackling hard BIPs (or general IPs). Gomory [43] introduced another strategy based on the concept of "cuts". A cut is a linear inequality added to the model that will cut away a portion of the feasible set. Of course we have to make sure that the cut-away portion has no possibility of containing the optimal solution.

The resulting feasible set is a smaller polyhedron that is known to contain the optimal solution. If a sufficient number of good cuts are added, then the optimal solution itself may lie at a vertex of the reduced polyhedron. It can then be found by solving the BIP with all those cuts. This is one of the bases of Gomory's cutting plane method. The other basis is that Gomory's cuts are generated from the fractional parts of non-integer solutions of the previously reduced LP. Relying on cuts alone, Gomory's cutting plane method slows down considerably as the number of iterations grows. As the number of cuts increases, each new cut gets smaller until it becomes negligible. However, when combined with the Branch-and-Bound method, the resulting Branch-and-Cut method becomes much more effective in reducing the duality gap (the difference between the upper bound and lower bound). Riding on the success of adding good cuts to reduce duality gaps, several other techniques to generate valid inequalities or cuts have been proposed in recent years. Savelsbergh [37] discusses more recent ideas for generating cuts and for preprocessing, such as probing, lifting, lifted covers, constraint pairing, surrogate constraints, and so on. These ideas have been incorporated into modern ILP solvers such as CPLEX to make them among the most powerful state-of-the-art general-purpose ILP solvers today. Additional ideas and techniques to further improve the LP representation of general BIP problems using such ideas as the Euclidean algorithm, optimality fixing, and variable elimination with a branch-and-cut approach are discussed in [42].

Knapsack Problem

There are special classes of BIP that are of great interest not only by themselves but also for the key roles that they play in developing solution algorithms of more general

BIPs. One of these special classes is a knapsack problem in which we try to find items to fit into a finite volume sack so as to maximize the total value of the items in the sack. A typical knapsack model is of the form:

$\max \sum_{i=1}^{n} v_i x_i$

s.t. $\sum_{i=1}^{n} a_i x_i \le b$

$x_i \in \{0,1\}, \quad i = 1, \ldots, n$

Key characteristics of the knapsack problem are that it has only one (inequality) constraint and all coefficients ai, vi, and b are all positive. Even with such a simple structure, the knapsack problem is still NP-hard. Nevertheless, it and many of its variants have been well studied and a number of efficient techniques and algorithms have been developed to solve the most interesting forms of knapsack problem quite efficiently.

Some of the ideas discussed earlier have also been customized to solve binary knapsack problems. For example, [33-36] use surrogate constraints and constraint pairing to help solve BIPs with 0-1 knapsack constraints. [37-41] report beneficial use of preprocessing, probing, and extended covers to solve binary knapsack problems. We will see later that binary knapsack problems play a key role in our development of solution methods for the operation assignment of PCB assembly, as the machine capacity constraint (5) is indeed a 0-1 knapsack constraint.
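For illustration, the routine below solves a small 0-1 knapsack instance with the textbook dynamic program over integer capacities; the data are made up, and the sketch is only meant to show the kind of subproblem that the machine capacity constraint (5) induces, not the method used later in this thesis.

```python
# Simple 0-1 knapsack via dynamic programming (illustrative data only).
def knapsack(values, weights, capacity):
    """Return the best total value and the chosen item indices."""
    n = len(values)
    best = [0] * (capacity + 1)                       # best[c] = max value with capacity c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, weights[i] - 1, -1):  # descending: each item used at most once
            if best[c - weights[i]] + values[i] > best[c]:
                best[c] = best[c - weights[i]] + values[i]
                keep[i][c] = True
    # Trace back the chosen items
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= weights[i]
    return best[capacity], sorted(chosen)

print(knapsack(values=[10, 7, 5, 9], weights=[3, 2, 1, 4], capacity=6))
# -> (22, [0, 1, 2]): items 0, 1 and 2 fit within capacity 6
```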

Decomposition and Duality

When faced with a large and complex problem too difficult to solve in one integrated whole, the most natural strategy is to break it down into smaller and simpler subproblems, deal with each subproblem separately (which should now be easy to do), and then integrate and coordinate individual solutions with the goal of guiding the integrated solution toward optimum. This is the core principle of decomposition. When subproblems are uncoupled, the overall optimal solution can be achieved in one iteration by simply assembling individual subproblem solutions to form the overall problem solution. For coupled subproblems, the coordinating and integrating task is more difficult, and often requires an iterative procedure at the outer (upper level) in the form of solving a master problem. Solving a master problem means adjusting “coordinating” variables to guide solutions of subproblems toward the overall optimum. The challenges lie in how to formulate and solve the master problem in tandem with solving the subproblems. All these depend of course on the strategy we use to do the decomposition and the principle we plan to use to do the coordination.

One important concept that has now taken a firm hold in making decomposition and coordination not only possible but also efficient and practical is duality. To illustrate the concept, consider the following problem:

$\min \sum_{i=1}^{n} f_i(x_i)$   (18)

s.t. $\sum_{i=1}^{n} g_i(x_i) \le b; \quad x_i \in X_i \subseteq R^{n_i}$

Because of the coupling constraint, there is no natural way to decompose the problem at this point. By introducing the dual variable λ ≥ 0, we can form the Lagrangian:

$\min_{x_i \in X_i} \sum_{i=1}^{n} f_i(x_i) + \lambda \left( \sum_{i=1}^{n} g_i(x_i) - b \right) = \min_{x_i \in X_i} \sum_{i=1}^{n} \left( f_i(x_i) + \lambda g_i(x_i) \right) - \lambda b,$

and the coupling (complicating) constraint is no longer a problem. Moreover, if λ is fixed for the moment, then uncoupled subproblems emerge

Subproblem $i$: $\min_{x_i \in X_i} f_i(x_i) + \lambda g_i(x_i)$   (19)

This suggests the useful decomposition strategy being sought. Solving each subproblem $i$ should be relatively easy since it is much smaller and has a simpler structure. In fact, with a proper value of the coordinating variable λ, say $\lambda^*$, the solutions of all subproblems taken collectively as $(x_1^*, x_2^*, \ldots, x_n^*)$ will be optimal to the original problem (18), if it exists and is feasible. Indeed, if all the functions $f_i$ and $g_i$ and the sets $X_i$ are convex, then by the strong duality theorem (SDT), both the existence of $\lambda^* \ge 0$ and the optimality of $(x_1^*, x_2^*, \ldots, x_n^*)$ for (18) are guaranteed. This is because, by the SDT, there must exist $\lambda^* \ge 0$ and then a solution $x_i^* \in X_i$ to (19) for each $i$ such that

$\forall \lambda \ge 0: \quad \sum_{i=1}^{n} f_i(x_i^*) + \lambda \left( \sum_{i=1}^{n} g_i(x_i^*) - b \right) \;\le\; \sum_{i=1}^{n} f_i(x_i^*) + \lambda^* \left( \sum_{i=1}^{n} g_i(x_i^*) - b \right) \;=\; \sum_{i=1}^{n} f_i(x_i^*) \;\le\; \sum_{i=1}^{n} f_i(x_i) + \lambda^* \left( \sum_{i=1}^{n} g_i(x_i) - b \right) \quad \forall x_i \in X_i$


The first inequality is due to the fact that $\lambda^*$ maximizes the dual. The middle equality is due to the SDT, and the last inequality is due to the fact that $x_i^*$ solves (19) for each $i = 1, \ldots, n$.

The equality implies that

$\lambda^* \left( \sum_{i=1}^{n} g_i(x_i^*) - b \right) = 0.$

Together with the first inequality above, we have $\lambda \left( \sum_{i=1}^{n} g_i(x_i^*) - b \right) \le 0 \quad \forall \lambda \ge 0$. Hence, $\sum_{i=1}^{n} g_i(x_i^*) - b \le 0.$

Subsequently, we conclude that $(x_1^*, x_2^*, \ldots, x_n^*)$ is feasible for (18). Finally, the last inequality and the feasibility of $(x_1^*, x_2^*, \ldots, x_n^*)$ imply that

$\sum_{i=1}^{n} f_i(x_i^*) \le \sum_{i=1}^{n} f_i(x_i) + \lambda^* \left( \sum_{i=1}^{n} g_i(x_i) - b \right) \le \sum_{i=1}^{n} f_i(x_i)$ for all feasible $x_i \in X_i$, since $\lambda^* \left( \sum_{i=1}^{n} g_i(x_i) - b \right) \le 0$ whenever $x$ is feasible.

Hence $(x_1^*, x_2^*, \ldots, x_n^*)$ is optimal to (18), as required.

The existence of a proper value of the coordinating variable λ* tells us that if we can find it then finding a solution to the original problem (18) can also be easily accomplished by solving (19) with λ = λ*. But how do we find λ*? It has to be done iteratively by

systematically adjusting $\lambda^{(k)}$ toward $\lambda^*$ while at the same time guiding the solution $x_i^{(k)}$ of (19) toward $x_i^*$. One way is to update $\lambda^{(k)}$ in such a way that its limiting point $\lambda^*$ maximizes the dual function:

$h(\lambda) = \min_{x_i \in X_i} \sum_{i=1}^{n} \left( f_i(x_i) + \lambda g_i(x_i) \right) - \lambda b$ over nonnegative $\lambda \ge 0$.

This is a coordination principle that we can use. A specific implementation of this idea is discussed next.

Lagrangian Relaxation and Subgradient Method (LR+S)

As discussed in [32, 43-50], many hard combinatorial optimization problems are made hard by the presence of a relatively small set of coupling or complicating constraints. They can be made easy if we can somehow temporarily remove the "coupling" effects. This brings us to the decomposition-coordination idea as discussed above. By dualizing the coupling constraints (i.e., multiplying each of those constraints by a dual variable (penalty) and bringing them into the objective function), we temporarily make the resulting Lagrangian problem separable, and hence decomposable. The decomposition-coordination strategy as discussed earlier is now applicable. Because we temporarily relax the coupling constraints by dualizing them and form a Lagrangian function, we call the strategy Lagrangian Relaxation (LR). An optimal value of a Lagrangian relaxation problem can certainly serve as a lower bound on the optimal value of the original (primal) problem. Often, this is a tighter lower bound than that given by LP relaxation. So one can imagine that replacing LP relaxation lower bounds in Branch-and-Bound by better LR bounds would produce better results faster. This will be one of the ideas pursued in this research. For now, we return to the second half of the picture: how to adjust the (coordinating) dual variables λ so as to converge to the right value λ* efficiently. Based on duality theory, we will try to update λ so as to maximize the dual function. The simplest updating scheme is to use the subgradient updating scheme:

$\lambda^{(k+1)} = \lambda^{(k)} + \alpha \times \text{subgradient}_{\lambda}\left(\text{Lagrangian evaluated at } x^{(k)}\right)$

In our example above, this would be:

$\lambda^{(k+1)} = \lambda^{(k)} + \alpha \left( \sum_{i=1}^{n} g_i(x_i^{(k)}) - b \right)$

Putting things together, the basic steps in Lagrangian Relaxation employing subgradients are:

1. Initialize the primal variables x(0) and dual variables λ(0) and define stopping criteria

2. Convert the problem into Lagrangian Relaxation formulation by dualizing all

coupling constraints.

3. At any iteration k with dual variables λ(k), solve the subproblems to form x(k)

4. Update the dual variables to λ(k+1) using the subgradient formula above and check for termination. If not terminated, return to step 3.

We illustrate a complete (LR+S) process using GAP as an example. In GAP model (9)-

(12), either constraint (10) or (11) can serve as a coupling constraint depending on whether we would like to decompose via task j or via agent i. If we do the former, then the corresponding LR problem would be:

(LR1) minimize $\sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij} x_{ij} + \sum_{j=1}^{n} \mu_j \left( \sum_{i=1}^{m} x_{ij} - 1 \right)$

subject to: $\sum_{j=1}^{n} r_{ij} x_{ij} \le b_i \quad \forall i$

$x_{ij} = 0 \text{ or } 1 \quad \forall i, j$

If we want to decompose based on agents, the corresponding LR would be

(LR2) minimize $\sum_{i=1}^{m}\sum_{j=1}^{n} c_{ij} x_{ij} + \sum_{i=1}^{m} \lambda_i \left( \sum_{j=1}^{n} r_{ij} x_{ij} - b_i \right)$

subject to:

$\sum_{i=1}^{m} x_{ij} = 1 \quad \forall j \in N$

$x_{ij} = 0 \text{ or } 1 \quad \forall i, j$

The first Lagrangian relaxation (LR1) becomes a multiple knapsack problem in which the constraints consist of m zero-one knapsack constraints. This formulation can be easily decomposed into m independent 0-1 knapsack problems, each of which can be solved using any of the available methods for solving 0-1 knapsacks. The second Lagrangian relaxation (LR2) becomes a generalized upper bound (GUB) problem that can be further broken down into n independent simple assignment problems, one per task. Each of these subproblems can be solved by inspection, which is much easier than solving a knapsack problem. A solution to the subproblem for task j can be written as

$x_{ij} = \begin{cases} 1 & \text{if } i = \arg\min_{1 \le i' \le m} \left( c_{i'j} + \lambda_{i'} r_{i'j} \right) \\ 0 & \text{otherwise} \end{cases}$

So does this mean that LR2 is a more attractive approach to solving GAP via Lagrangian relaxation? The answer is not so obvious. If we are to use LR to replace LP relaxation in producing bounds in the Branch-and-Bound strategy, then LR1 will yield a tighter lower bound than LR2 at the price of being a little harder to solve. Since LR2 satisfies the integrality property (i.e., solving LR2 as an LP will always yield an integer solution [44, 46]), lower bounds produced by LR2 will not be better than bounds generated by LP relaxation. LR1 does not have the integrality property, so its lower bounds will be tighter than the LP relaxation bounds (hence better than LR2 bounds). Another possible advantage of LR1 over LR2 occurs in the following rare event. Since the optimal value of LR1 is higher than

LR2, its optimal solution has a better chance of being feasible for the primal problem

(GAP) as well. In this case, it will also be optimal to the primal GAP. However, this occurs very rarely. So in general, we need to find an alternative way to reach the optimal solution of GAP (or to establish a new incumbent upper bound through an improved feasible solution of GAP) from a solution of LR. For example, we may do so by perturbing around the solution to the Lagrangian relaxation and performing a neighborhood search.

In summary, because of the cost saved in solving LR2 (over the time used to solve knapsack subproblems in LR1), we may still prefer LR2 to LR1. Since using LR2 cannot produce better lower bounds than LP relaxation, the choice between LR2 and LP relaxation depends on the size of the problems and how efficiently we can solve LR2 to optimality. Solving an LR problem usually requires an iterative process such as a subgradient method, whose rate of convergence, if it converges at all, is linear and slow.

So in general solving LR2 to optimality would be very slow and time consuming, if it is possible at all. Thus if LP relaxation does not produce too large an LP problem, then it should be used since efficient LP solvers are readily available, and more importantly it will produce sharper bounds than partially solved LR2. On the other hand, if the resulting

LP relaxation subproblem is large, then Lagrangian relaxation LR2 should be used.

There are three popular approaches for updating the multipliers ($\mu_j^{(k)}$ in LR1 or $\lambda_i^{(k)}$ in LR2): (1) the subgradient method, (2) various versions of the simplex method implemented using column generation techniques [44, 46, 54], and (3) a multiplier adjustment method. The most popular of the three, and the one most relevant to our work, is the subgradient method as reviewed in [43, 47, 51-53]. A typical subgradient-based updating scheme if LR1 is used is:

$\mu_j^{(k+1)} = \mu_j^{(k)} + \alpha \left( \sum_{i=1}^{m} x_{ij}^{(k)} - 1 \right)$

whereas if LR2 is used:

$\lambda_i^{(k+1)} = \lambda_i^{(k)} + \alpha \left( \sum_{j=1}^{n} r_{ij} x_{ij}^{(k)} - b_i \right)$
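As a concrete illustration of the LR2 update rule, the sketch below runs a plain subgradient loop on a tiny GAP instance: at each iteration the relaxed problem is solved by inspection (each task picks the agent with the smallest Lagrangian cost $c_{ij} + \lambda_i r_{ij}$), and the multipliers are moved along the capacity violations and projected back to nonnegativity. The instance data and the diminishing step size $\alpha = 1/k$ are illustrative assumptions, not choices made in this thesis.

```python
# Illustrative LR2 + subgradient loop for a tiny GAP instance (not the thesis's code).
c = [[4, 2, 6, 3],   # c[i][j]: cost of task j on agent i
     [5, 3, 4, 2]]
r = [[3, 2, 4, 3],   # r[i][j]: resource of agent i used by task j
     [2, 3, 3, 4]]
b = [6, 6]           # b[i]: capacity of agent i
m, n = len(b), len(c[0])

lam = [0.0] * m                       # multipliers for the dualized capacity constraints
best_lower_bound = float("-inf")

for k in range(1, 101):
    # Solve LR2 by inspection: each task j picks the agent with the smallest Lagrangian cost.
    x = [[0] * n for _ in range(m)]
    for j in range(n):
        i_star = min(range(m), key=lambda i: c[i][j] + lam[i] * r[i][j])
        x[i_star][j] = 1
    # Dual (lower-bound) value: the Lagrangian objective at the relaxed solution.
    dual = (sum(c[i][j] * x[i][j] for i in range(m) for j in range(n))
            + sum(lam[i] * (sum(r[i][j] * x[i][j] for j in range(n)) - b[i]) for i in range(m)))
    best_lower_bound = max(best_lower_bound, dual)
    # Subgradient step along the capacity violations, projected back to lambda >= 0.
    step = 1.0 / k
    for i in range(m):
        g_i = sum(r[i][j] * x[i][j] for j in range(n)) - b[i]
        lam[i] = max(0.0, lam[i] + step * g_i)

print("best Lagrangian lower bound:", best_lower_bound)
```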


Linear Programming

In Branch-and-Bound, if LP relaxation is used, then we have to solve LPs. If the

LP subproblem is not too large, then we can use the standard simplex method which is a combinatorial search vertex hopping procedure. It hops from one vertex of the polyhedron (defining the feasible set of the LP) to a better neighboring vertex until no better adjacent vertex can be found. Because of convexity, and if an appropriate cycling prevention procedure is installed, the simplex method will always find an optimal vertex in a finite number of steps. Nevertheless, the simplex method is not a polynomial-time algorithm. It is possible that the number of vertices visited by the simplex method could be very large (with exponential increase) for a very large scale LP. This will wipe out the simplex’s key advantage over its competitors—namely a very small per-hop cost

(equaling the cost of only one pivot operation). For a very large scale LP, the current method of choice is an interior point method, which tries to approach an optimal vertex along a path strictly interior to the polyhedron and avoids wasting time at the boundary.

While the cost per iteration can be very high, the number of iterations can be many orders of magnitude smaller than for the simplex method. Thus the overall cost (cost per iteration × number of iterations or hops) of the interior point method could be advantageously smaller than that of the simplex method. Accordingly, for a very large LP we will use an interior point method, and for smaller LPs we will still use the simplex method. The literature on

LP is quite large, both for the simplex method and for interior point methods. [43, 54-57, 61] provide a very comprehensive review and discussion of the simplex method and its many powerful variants (e.g. the primal-dual method). Similar comprehensive reviews and discussions of interior-point based methods for very large problems can be found in [54, 56].

Heuristics

Many heuristics are useful for finding good solutions to intractable combinatorial problems, often very quickly. However, solutions found by heuristics are often suboptimal. For the operation assignment of the PCB assembly process, greedy heuristics as described in [4, 61] are indeed very popular among PCB manufacturers. We will describe these heuristics in the next section. For general combinatorial optimization problems, local search heuristics such as Tabu search have drawn considerable interest from researchers. [59-60] give current developments, implementations, and applications of

Tabu search in solving hard combinatorial optimization problems. Other local search heuristics described in [58, 61] attempt to find good feasible solutions in the neighborhoods or solution space of the current solution set. In this work we will make use of many of these heuristics so we will describe some of them below, and some more in due time.

The goal of a combinatorial optimization problem is to find a solution $s^* \in S$ that optimizes an objective $c(s)$. That is, we wish to solve

$c(s^*) = \underset{s \in S}{\text{optimum}} \; c(s)$

where $S$ is the set of feasible solutions and $c$ is an objective function. Now let $N: S \to 2^S$ be a neighborhood function defined for each solution $s \in S$, such that $N(s)$ is a subset of neighbors of $s$, or solution points that are close to $s$. A local search algorithm called the iterative improvement algorithm follows these steps:

1. Compute an initial feasible solution s ∈ S

2. while N(s) contains a better solution than s do {

3. Choose a solution s′ ∈ N(s) with better value c(s′) than c(s).

4. Set s ← s′ }

5. Output s.

A solution s computed by the algorithm has the best possible value among all the solutions in its neighborhood N(s)

$c(s) = \underset{s' \in N(s)}{\text{optimum}} \; c(s')$
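The iterative improvement scheme above can be stated as a short, generic routine; the bit-flip neighborhood and the toy objective used here are illustrative placeholders and are unrelated to the PCB neighborhoods developed later in this thesis.

```python
# Generic iterative improvement (first-improvement) local search; problem data are illustrative.
import random

def iterative_improvement(initial, neighbors, cost):
    """Repeatedly move to a better neighbor until none exists (local optimum)."""
    s = initial
    improved = True
    while improved:
        improved = False
        for s_prime in neighbors(s):
            if cost(s_prime) < cost(s):   # minimization
                s = s_prime
                improved = True
                break                     # restart the scan from the new solution
    return s

# Toy example: minimize the number of 1-bits in a binary string by flipping one bit at a time.
def flip_neighbors(s):
    for i in range(len(s)):
        yield s[:i] + (1 - s[i],) + s[i + 1:]

start = tuple(random.randint(0, 1) for _ in range(8))
best = iterative_improvement(start, flip_neighbors, cost=sum)
print(start, "->", best, "cost", sum(best))
```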

Stochastic local search methods are among the most successful techniques for solving combinatorial problems. The following components have to be specified: search space, solution set, neighborhood relation, memory states, initialisation function, step function, and termination predicate. A general outline of a stochastic local search method is:

• Determine an initial search state

• While termination criterion is not satisfied:

perform a stochastic search step

if necessary, update incumbent solution

• Return incumbent solution or report failure

Stochastic local search is used on problems with a large number of local optima. Typical search techniques try to maintain some monotone property (e.g. an improving objective function) to ensure convergence, but doing so often leaves the process trapped at a local optimum. To break out of these traps, "degrading steps" have to be allowed occasionally and when appropriate; stochastic searches allow exactly that. The simplest way to allow degrading search steps is to permit a random move to a neighboring search position. In the randomized iterative improvement scheme called iterated local search, a neighbor is chosen uniformly at random: a perturbation procedure modifies a given candidate solution to allow the search process to escape a local optimum, and an acceptance criterion is then used to decide whether the search should continue from the newly found local optimum. The iterated local search is outlined as follows:

• Determine an initial candidate solution s

• Perform subsidiary local search on s

• While termination criterion is not satisfied:

r := s

perform perturbation on s

perform subsidiary local search on s

based on the acceptance criterion, keep s or revert to s := r
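A minimal sketch of this iterated local search outline follows, combining a subsidiary local search, a random perturbation, and an acceptance criterion; the bit-flip moves, perturbation strength, and toy objective are illustrative assumptions only.

import random

def local_search(s, cost):
    # First-improvement descent over single bit flips.
    improved = True
    while improved:
        improved = False
        for i in range(len(s)):
            t = s[:i] + (1 - s[i],) + s[i + 1:]
            if cost(t) < cost(s):
                s, improved = t, True
                break
    return s

def perturb(s, strength=2):
    # Randomly flip a few bits to escape the current local optimum.
    s = list(s)
    for i in random.sample(range(len(s)), k=min(strength, len(s))):
        s[i] = 1 - s[i]
    return tuple(s)

def iterated_local_search(s0, cost, iters=100):
    s = best = local_search(s0, cost)
    for _ in range(iters):
        r = s                                # remember the incumbent
        s = local_search(perturb(s), cost)   # perturb, then re-optimize
        if cost(s) >= cost(r):               # acceptance: keep improvements,
            s = r                            # otherwise revert to r
        best = min(best, s, key=cost)
    return best

if __name__ == "__main__":
    random.seed(0)
    cost = lambda s: abs(sum(s) - 3)         # toy objective
    print(iterated_local_search((0,) * 8, cost))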

2.4 Existing Algorithms for Operation Assignment of PCB Assembly

Not much research has been done on solving the operation assignment problem for PCB assembly more efficiently, because most manufacturers are satisfied with heuristics: even though the solutions obtained may be far from optimal, heuristics are simple to use and do the job quickly. The two pieces of work in [4, 11] appear to be the most recent attempts to solve simplified versions of the operation assignment model to optimality using Lagrangian relaxation, Branch-and-Bound, and some heuristics. Cases with single and multiple machines were considered. While [4] also considered cases with split boards, [11] did not. As it turns out, the only methods in [4] that are capable of handling split boards and multiple automatic/semiautomatic machines are greedy heuristics, which we now briefly describe.

Greedy board heuristics for Single Automatic Machine

The rationale behind this approach is that: a) because of the expected low level of component commonality, it may be better to assign boards entirely to the automatic processes or the manual process, rather than splitting them, and b) consideration of existing component commonality may yield cost-saving combinations of boards assigned to the automatic process.

The Greedy Board algorithm starts by assigning all board types to the manual process and successively switching the board type with the smallest incremental time (hence incremental cost) per board to the automatic machine, until there is no more room to add any more component types to the automatic machine. To simplify the implementation, it is assumed that the cost of inserting a component type on a given machine is the same for all component types, and that the cost of setting up a machine for a board type is the same for all board types. So, symbolically, if we use i = 1 for the automatic machine and i = 2 for the manual process, then the per-unit cost of inserting component j on process i, cij, becomes simply ci for all j, and the per-unit setup cost for board k on process i, sik, becomes si for all k.

Step 1: Initialization:

• Let $S = \emptyset$, $T = \{1,\dots,K\}$, and $J = \sum_{k=1}^{K}\left[\sum_{j} v_{jk}c_2 + s_2\right]$

Step 2: ‘Greedy’ Board loading:

• Calculate $\gamma_k = \dfrac{\sum_{j \notin S} r_{jk}}{d_k}$ for all $k \in T'$

where

$T' = \left\{k \in T : \sum_{j \notin S} r_{jk} \le N_1 - |S|\right\}$

• Find $m = \arg\min_{k \in T'} \gamma_k$

• Let $T = T - \{m\}$, $S = S \cup \{j : r_{jm} = 1\}$, and $J = J + \left[\sum_{j} v_{jm}(c_1 - c_2) + s_1 - s_2\right]$

Step 3: Post-Processing:

• If $T' \neq \emptyset$, return to Step 2. Otherwise,

(i) For all $j \notin S$: set $x_{2jk} = r_{jk}$, $x_{1jk} = 0$, for all $k$

(ii) For all $k \notin T$: set $y_{1k} = 1$, $y_{2k} = 0$, and $x_{2jk} = 0$, $x_{1jk} = r_{jk}$, for all $j$

(iii) For all $k \in T$: set $y_{2k} = 1$, and if

$s_1 + \sum_{j \in S} v_{jk}c_1 < \sum_{j \in S} v_{jk}c_2$

then set $y_{1k} = 1$, $x_{1jk} = r_{jk}$ and $x_{2jk} = 0$ for all $j \in S$, and

$J = J - \left[\sum_{j \in S} v_{jk}(c_2 - c_1)\right] + s_1$

Otherwise, set $y_{1k} = 0$ and $x_{1jk} = 0$, $x_{2jk} = r_{jk}$, for all $j \in S$.

Stop.

Step 1 assigns all boards to the manual process. In Step 2, the incremental number of new component slots per board produced ($\gamma_k$) is calculated for each board whose incremental assignment to the machine will not violate the slot capacity constraint. The board with the minimum value of $\gamma_k$ is then switched to the automatic machine; this is equivalent to greedily maximizing the incremental number of boards produced per additional slot used. The process continues until no more boards can be switched to the automatic machine. Step 3 simply completes the assignment process by converting the results from Step 2 into actual values of the decision variables $x_{ijk}$, $y_{ik}$, $z_{ij}$. As for computational complexity, Step 2 is executed at most $K$ times, each execution requiring on the order of $K^2 \log(K)$ flops. Thus an upper bound on the computational complexity of the Greedy Board algorithm is $K^3 \log(K)$.
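The sketch below is a compact, illustrative implementation of the Greedy Board idea for a single automatic machine. The data layout (r as a 0-1 matrix, a list of demands, slot capacity N1) follows the notation above, but the code is a simplified reading of Steps 1-2 only (the final post-processing into x, y, z is omitted); it is not the implementation used in this research.

def greedy_board(r, d, N1):
    # r[j][k] = 1 if component j is on board k; d[k] = demand of board k.
    # Returns (S, automatic): components loaded on the machine and the
    # boards switched to the automatic process.
    J, K = len(r), len(r[0])
    S = set()                      # components assigned to the machine
    T = set(range(K))              # boards still on the manual process
    automatic = []
    while True:
        # Boards whose missing components still fit in the remaining slots
        feasible = [k for k in T
                    if sum(r[j][k] for j in range(J) if j not in S)
                    <= N1 - len(S)]
        if not feasible:
            break
        # gamma_k = new slots needed per unit of demand; pick the smallest
        gamma = {k: sum(r[j][k] for j in range(J) if j not in S) / d[k]
                 for k in feasible}
        m = min(gamma, key=gamma.get)
        T.remove(m)
        automatic.append(m)
        S |= {j for j in range(J) if r[j][m] == 1}
    return S, automatic

if __name__ == "__main__":
    r = [[1, 0, 1],      # component 0 on boards 0 and 2
         [1, 1, 0],      # component 1 on boards 0 and 1
         [0, 1, 1]]      # component 2 on boards 1 and 2
    print(greedy_board(r, d=[10, 5, 8], N1=2))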

Greedy Board Algorithm with Multiple Automatic Machines

This is mainly the algorithm for a single automatic machine with modifications for multiple automatic machines. The modifications can be done in two versions depending on whether or not a component type can be assigned to multiple machines.

Version 1 does not allow assigning any components to machine i+1 if they have been previously assigned to machine i, and Version 2 does. However, the only previously assigned components assignable to the next machine(s) are those that are not involved in previously assigned board types that have been completely processed by previously assigned machines. Typical steps of the algorithm can be stated as follows:

Step 1: Initialization:

• Let i = 1, SK = {1,…, K}, SJ = {1,…, J}

Step 2: Greedy Board Assignment:

• Apply the Greedy Board algorithm for a single automatic machine to automatic machine i, considering the board set SK and the component set SJ.

Step 3: Updating:

(Version 1)

• Remove from SK those boards that are completely processed on machine i, and remove from SJ those components assigned to machine i.

• Let i = i + 1

• If i < I, return to Step 2.

(Version 2)

• Remove from SK those boards that are completely processed on machine i, and remove from SJ those components that are associated with boards that can be completely processed on machine i.

• Let i = i + 1

• If i < I, return to Step 2.

Step 4: Post-Processing:

• Given the assignment of components to machines, for each board not completely processed on a single machine, determine the least-cost way to produce the board.
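As a rough illustration of the Version 1 loop above, the sketch below wraps a restricted single-machine greedy pass inside the machine loop and then prunes fully processed boards and already-assigned components. It is a simplified reading under the stated assumptions, not code from this thesis.

def greedy_board_multi(r, d, capacities):
    J, K = len(r), len(r[0])
    SK, SJ = set(range(K)), set(range(J))

    def greedy_pass(Ni):
        # One Greedy-Board pass restricted to the surviving sets SJ, SK.
        S, boards = set(), []
        while True:
            feas = [k for k in SK - set(boards)
                    if sum(1 for j in SJ if r[j][k] and j not in S) <= Ni - len(S)]
            if not feas:
                return S, boards
            gamma = {k: sum(1 for j in SJ if r[j][k] and j not in S) / d[k]
                     for k in feas}
            m = min(gamma, key=gamma.get)
            boards.append(m)
            S |= {j for j in SJ if r[j][m]}

    assignment = {}
    for i, Ni in enumerate(capacities):          # automatic machines only
        S, boards = greedy_pass(Ni)
        assignment[i] = (S, boards)
        done = {k for k in boards if all(j in S for j in SJ if r[j][k])}
        SK -= done                               # boards finished on machine i
        SJ -= S                                  # components now on machine i
        if not SK or not SJ:
            break
    return assignment, SK                        # SK goes to post-processing

if __name__ == "__main__":
    r = [[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 1]]
    print(greedy_board_multi(r, d=[10, 5, 8], capacities=[2, 2]))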

The Greedy Board algorithm will be adapted to form a part of the algorithm proposed in this research. The other greedy-type heuristic discussed in [4] is the Stingy Component algorithm. Even though we will not use it in our research, we briefly review it and other related methods below to close out this chapter.


Stingy Component heuristics for Single Automatic Machine

This algorithm focuses on the entire set of components and starts by assigning all components to the automatic process. If the number of components J is less than or equal to the machine's capacity, there is nothing further to do. On the other hand, if J exceeds the machine's capacity, components are sequentially removed based on the "smallest cost increase to the manual process" criterion, until the machine's capacity constraint is satisfied. Any components not assigned to the automatic process will of course be assigned to the manual process, and some components may be assigned to both processes if a lower cost can be found. By considering the incremental cost of removing each component, the less frequently used components will never be assigned to the automatic machine in a cost-minimizing solution, while the most frequently used components will always be assigned to the automatic machine. So, again symbolically, we use i = 1 for the automatic machine and i = 2 for the manual process. The per-unit cost of inserting component j on process i is assumed to be the same for all j, so cij becomes simply ci for all j. Likewise, the per-unit setup cost for board k on process i is assumed to be the same for all k, so sik becomes si for all k. The Stingy Component algorithm can be executed as follows:

Step 1: Initialization:

• Let $S = \{1,\dots,J\}$, $\delta_k = 1$ for all $k$, and $J = \sum_{k=1}^{K}\left[\sum_{j} v_{jk}c_1 + s_1\right]$

• If $|S| \le N_1$, stop.

Step 2: ‘Stingy’ component removal:

• Calculate $\Delta_j = \sum_{k=1}^{K}\left[v_{jk}(c_2 - c_1) + r_{jk}\delta_k s_2\right]$ for all $j \in S$.

• Find $l = \arg\min_{j \in S} \Delta_j$

• Let $S = S - \{l\}$, $J = J + \Delta_l$, and for all $k$ such that $\delta_k r_{lk} = 1$, set $\delta_k = 0$.

Step 3: Post-Processing:

• If $|S| > N_1$, return to Step 2. Otherwise,

(i) For all $j \notin S$: set $x_{2jk} = r_{jk}$, $x_{1jk} = 0$, for all $k$

(ii) For all $k$ such that $\delta_k = 1$: set $y_{1k} = 1$, $y_{2k} = 0$, and $x_{2jk} = 0$, $x_{1jk} = r_{jk}$, for all $j$

(iii) For all $k$ such that $\delta_k = 0$: set $y_{2k} = 1$, and if

$s_1 + \sum_{j \in S} v_{jk}c_1 > \sum_{j \in S} v_{jk}c_2$

then set $y_{1k} = 0$, $x_{1jk} = 0$ and $x_{2jk} = r_{jk}$ for all $j$, and

$J = J + \left[\sum_{j \in S} v_{jk}(c_2 - c_1)\right] - s_1$

Otherwise, set $y_{1k} = 1$ and $x_{1jk} = r_{jk}$, $x_{2jk} = 0$, for all $j \in S$.

Stop.

Step 1 assigns all components (1,…, J) to the automatic process. Step 2 removes individual components from the automatic process to the manual process using the incremental cost ($\Delta_j$) until the machine's slot capacity is satisfied; the objective function (total setup cost plus total processing cost) is also updated. Step 3 updates the decision variables by checking all boards with $\delta_k = 1$, which are completely processed by the automatic process, and the remaining boards with $\delta_k = 0$, which are processed (at least in part) by the manual process. The decision variables and the objective function are updated accordingly. As for computational complexity, each time Step 2 is reached, at most $J$ sums are calculated and one sort is performed, leading to a maximum of $J^2 \log(J)$ calculations per step. Step 2 is reached at most $J - N_1$ times, so an upper limit on the computational complexity of the Stingy Component algorithm is $J^3 \log(J)$.
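A compact, illustrative sketch of the Stingy Component removal loop (Steps 1-2 above) for a single automatic machine follows; the data layout and uniform costs are assumptions consistent with the notation above, and the post-processing step is omitted. It is not the implementation used in this research.

def stingy_component(r, v, c1, c2, s2, N1):
    # r[j][k]: component j used on board k; v[j][k] = n_jk * d_k.
    J, K = len(r), len(r[0])
    S = set(range(J))                 # components on the automatic machine
    delta = [1] * K                   # delta[k]=1: board k fully automatic
    while len(S) > N1:
        # Delta_j: incremental cost of moving component j to manual insertion
        Delta = {j: sum(v[j][k] * (c2 - c1) + r[j][k] * delta[k] * s2
                        for k in range(K))
                 for j in S}
        l = min(Delta, key=Delta.get) # 'stingy' choice: smallest cost increase
        S.remove(l)
        for k in range(K):            # boards using l are no longer fully automatic
            if r[l][k] and delta[k]:
                delta[k] = 0
    return S, delta

if __name__ == "__main__":
    r = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]
    v = [[10, 0, 8], [10, 5, 0], [0, 5, 8]]
    print(stingy_component(r, v, c1=1.0, c2=3.0, s2=2.0, N1=2))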

Stingy Component Algorithm with Multiple Automatic Machines

By applying the Stingy Component algorithm for a single machine, two versions of Stingy Component algorithms for multiple machines can be developed. Version 1 does not allow assignment of any component to machine i+1 if it has already been assigned to machine i, while Version 2 does.

Step 1: Initialization:

• Let i = 1, SK = {1,…, K}, SJ = {1,…, J}

Step 2: Stingy Component Assignment:

• Apply the Stingy Component algorithm to machine i, considering the board set SK and the component set SJ.

Step 3: Updating (Version 1):

• Remove from SK those boards that are completely processed on machine i, and remove from SJ those components assigned to machine i.

• Let i = i + 1

• If i < I, return to Step 2.

Step 3: Updating (Version 2):

• Remove from SK those boards that are completely processed on machine i, and remove from SJ those components that are associated only with boards that can be completely processed on machine i.

• Let i = i + 1

• If i < I, return to Step 2.

Step 4: Post-Processing:

• Given the assignment of components to machines, for each board not completely processed on a single machine, determine the least-cost way to produce the board.

The Stingy Component algorithm will perform well when setup costs are low. In practical situations, however, setup costs are high relative to insertion costs. Therefore this heuristic will not be pursued further in this research. We now review the relevant methods discussed in [11]:

Lagrangian Relaxation Heuristic with Single Machine

As described in [11], the case with a single automatic machine can be solved by Branch-and-Bound combined with Lagrangian relaxation. First, model (1)-(8) is simplified by eliminating the variables associated with the manual process. This gives the following model for one automated process:

BIP1

minimize $\sum_{j}\sum_{k}(c_{1j} - c_{2j})v_{jk}x_{jk} + \sum_{i}\sum_{k} s_{ik}d_k y_{ik} + \sum_{j}\sum_{k} c_{2j}v_{jk}$

subject to

$y_{1k} \ge x_{jk};\quad \forall j,k \ni r_{jk} = 1,$

$y_{2k} \ge 1 - x_{jk};\quad \forall j,k \ni r_{jk} = 1,$

$z_j \ge x_{jk};\quad \forall j,k \ni r_{jk} = 1,$

$\sum_{j} z_j \le N,$   (BIP1)

$x_{jk} \in \{0,1\};\quad \forall j,k \ni r_{jk} = 1,$

$y_{ik} \in \{0,1\};\quad \forall i,k,$

$z_j \in \{0,1\};\quad \forall j$

BIP1 allows board splitting. If no board splitting is allowed, further simplification can be made by eliminating the variables $x_{jk}$, using $y_{1k} = y_k$, $y_{2k} = 1 - y_k$, $z_{1j} = z_j$, $z_{2j} = 1 - z_j$, and $x_{ijk} = 1$ if and only if $y_{ik} = z_{ij} = 1$ (because $y_{ik}$ and $z_{ij}$ are completely specified by the problem solution). This leads to the following model for a single machine with no board splitting:

BIP2:

minimize $\sum_{k}\left[(s_{1k} - s_{2k})d_k + \sum_{j}(c_{1j} - c_{2j})v_{jk}\right]y_k + \sum_{k}\left[s_{2k}d_k + \sum_{j} c_{2j}v_{jk}\right]$

subject to

$z_j \ge y_k;\quad \forall j,k \ni r_{jk} = 1,$

$\sum_{j} z_j \le N,$   (BIP2)

$y_k \in \{0,1\};\quad \forall k,$

$z_j \in \{0,1\};\quad \forall j.$

After adding the necessary slack/surplus variables, BIP1 and BIP2 can be rewritten more compactly as MP1:

Minimize cTw

Subject to

$Mw = b$

$g(w) \le N$   (MP1)

$w \in \{0,1\}^n$

where

c = the coefficient vector of insertion and setup costs,

w = (x, y, z), the vector of decision variables,

M = the coefficient matrix of all constraints except the capacity constraint,

g(w) = the function representing the capacity constraint, and

N = the maximum number of different component types that can be assigned to the automatic process.

We note that M is totally unimodular (every square submatrix of M has determinant 0, +1, or −1). Thus, if the capacity constraint in MP1 is "relaxed", it is well known that the LP relaxation of MP1 will always produce integer solutions, which in turn are optimal for the original MP1 as well. So the problem is how to handle the relaxation of the capacity constraint of MP1 in the most efficient way. [11] uses Lagrangian relaxation with Branch-and-Bound, which leads to the following model MP2:

$\theta(\lambda) = \min_{w}\ c^T w + \lambda\,(g(w) - N)$

subject to

$Mw = b$   (MP2)

$0 \le w \le 1$

If (i) w* is a feasible point of MP2 (ii) for some λ* ≥ 0, λ*( g(w*) - N) = 0, and (iii) g(w*) ≤ N (this along with (i) imply that w* is feasible for MP1), then by LP duality theory, w* is an optimal solution of MP1, and λ* maximizes the dual function θ(λ).

So if we can find the right value of λ* and solve MP2 until the above complementary slackness and primal feasibility conditions are satisfied, MP1 is solved. The process has to be carried out iteratively using Branch-and-Bound. The branching variables are those components of w that correspond to y_ik. At each node corresponding to some fixed y_ik, λ* is estimated by solving the LP relaxation of MP1 modified for that node and using the optimal dual variable (multiplier) of the capacity constraint as the estimate of λ*. This estimate is then used to form MP2 for that node and to obtain the values of the remaining components of w*. Conditions (ii) and (iii) are checked. If both are satisfied, the node is fathomed, the appropriate upper and lower bounds are updated, and backtracking is performed until the Branch-and-Bound process is completed. If either (ii) or (iii) (or both) is violated, branching and bounding continue below the current node. The complete process is summarized below:

Step 1: Initialization:

• Solve the LP relaxation of MP1.

• If the solution is integral, stop (it is an optimal solution of MP1).

• If the solution is not integral, its value provides a lower bound for MP1.

Step 2: Lagrangian Relaxation:

• Set λ equal to the shadow price of the capacity constraint in the LP relaxation of MP1 from Step 1.

• Solve MP2.

• If g(w) = N, stop (the solution is optimal for MP1).

• If g(w) > N, adjust λ using sensitivity analysis until g(w) ≤ N.

• If g(w) < N, go to Step 3.

Step 3: Improving the upper bound:

• Compute

$a = N - \sum_{j=1}^{J} z_j$ (the number of empty slots remaining on the machine),

$\alpha_k = \sum_{j=1}^{J} r_{jk}(1 - z_j)$ (the number of component types board k would add to the machine),

$\Omega = \{k \mid 1 \le \alpha_k \le a\}$ (the set of all boards that can be moved to the machine), and

$\psi = \{k \mid y_{1k} = y_{2k} = 1\}$ (the set of all boards split between the machine and the manual process)

• If boards can be split:

i) If $a \neq 0$ and $\Omega \neq \emptyset$, compute the cost saving for each board $k \in \Omega$,

$\gamma_k = s_{2k}d_k y_{2k} - s_{1k}d_k(1 - y_{1k}) + \sum_{j=1}^{J}(c_{2j} - c_{1j})v_{jk}(1 - x_{jk})$,

compute $k^* = \arg\max_{k \in \Omega}(\gamma_k / \alpha_k)$, and add board $k^*$ and all of its components to the automatic process; recompute $a$ and $\Omega$.

ii) If $a \neq 0$, $\Omega = \emptyset$, and $\psi = \emptyset$, then stop; otherwise compute

$\varphi_j = \sum_{k \in \psi}(c_{1j} - c_{2j})v_{jk}(1 - x_{jk})$

and add the component $j$ with the highest value to the machine; recompute $a$.

iii) If $a = 0$ and $\Omega = \emptyset$, then stop.

• If boards cannot be split:

i) If $a \neq 0$ and $\Omega \neq \emptyset$, compute the cost saving for each board $k \in \Omega$,

$\gamma_k = \left[(s_{2k} - s_{1k})d_k + \sum_{j=1}^{J}(c_{2j} - c_{1j})v_{jk}\right](1 - y_k)$,

compute $k^* = \arg\max_{k \in \Omega}(\gamma_k / \alpha_k)$, and add board $k^*$ and all of its components to the automatic process; recompute $a$ and $\Omega$.

ii) If $a = 0$ or $\Omega = \emptyset$, then stop.

Step 4: Apply Branch-and-Bound to find the optimal solution:

• Branch on the $y_{ik}$ variables (fixing them to 0 and 1) that are non-integral in the solution of MP2, and bound by using Steps 1 through 3.

Note that without Step 4 the process is just a straight Lagrangian relaxation. As noted in [11], applying only the Lagrangian relaxation heuristic usually yields a solution that is optimal (for MP1) or very nearly optimal (error less than 0.03% for a large scale problem). Thus a full-blown Branch-and-Bound execution does not seem to be necessary.
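As a small illustration of the termination test used in this heuristic, the sketch below checks primal feasibility and complementary slackness for a candidate solution of MP2, i.e. conditions (ii) and (iii) above; the function name, tolerance, and example numbers are illustrative assumptions, not taken from [11].

def check_optimality(g_w, N, lam, tol=1e-9):
    # Return True if lam*(g(w)-N) ~= 0 (complementary slackness)
    # and g(w) <= N (primal feasibility of the capacity constraint).
    complementary = abs(lam * (g_w - N)) <= tol
    feasible = g_w <= N + tol
    return complementary and feasible

if __name__ == "__main__":
    print(check_optimality(g_w=48, N=50, lam=0.0))   # True: slack, lam = 0
    print(check_optimality(g_w=50, N=50, lam=2.5))   # True: tight constraint
    print(check_optimality(g_w=53, N=50, lam=2.5))   # False: infeasible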

Lagrangian Relaxation Heuristic with No Board Splitting and Multiple Machines

With no board splitting allowed, only one machine/process per board is required.

Accordingly, the variables xijk can be eliminated and (1)-(8) can be simplified as shown below:

MBIP:

minimize $\sum_{i=1}^{I-1}\sum_{k}\left[(s_{ik} - s_{Ik})d_k + \sum_{j}(c_{ij} - c_{Ij})v_{jk}\right]y_{ik} + \sum_{k}\left[s_{Ik}d_k + \sum_{j} c_{Ij}v_{jk}\right]$

subject to

$\sum_{i=1}^{I-1} y_{ik} \le 1;\quad \forall k,$

$z_{ij} \ge y_{ik};\quad i = 1,2,\dots,I-1;\ \forall j,k \ni r_{jk} = 1,$

$\sum_{j} z_{ij} \le N_i;\quad i = 1,2,\dots,I-1,$   (MBIP)

$y_{ik} \in \{0,1\};\quad i = 1,2,\dots,I-1;\ \forall k,$

$z_{ij} \in \{0,1\};\quad i = 1,2,\dots,I-1;\ \forall j.$

A “fastest machine” heuristic has been developed to work with Lagrangian relaxation and the “single-machine, no board splitting” algorithm above, sequentially assigning boards to machines in order to solve MBIP. A summary of the algorithm is shown below:


Step 1: Initialization:

• κ = {1,…, K} (the set of boards)

• ξ = {1,…, I} (the set of machines)

Step 2: Lagrangian relaxation heuristic:

• Compute the fastest machine $i^* = \arg\min_{i \in \xi}\left(\sum_{k \in \kappa}\left(s_{ik} + \sum_{j} n_{jk}c_{ij}\right)\right)$, defined as the machine that can produce all boards in the set κ the fastest, with ties broken arbitrarily.

• Apply the previous single-machine algorithm, using machine i* as the single machine and considering all boards in the set κ.

Step 3: Removing and Updating:

• Remove from the set κ all boards assigned to machine i*.

• Update ξ = ξ − {i*}.

• If ξ ≠ ∅, return to Step 2 and repeat the process with the remaining machines and the boards remaining in κ. Otherwise, stop and assign all boards remaining in κ to the manual process.

The “fastest machine” is defined as the machine that can produce the remaining set of (unassigned) boards the fastest. The fastest-machine heuristic assigns as many boards as possible to the fastest machine using the single-machine algorithm above. After the fastest machine has been assigned and all boards assigned to that machine have been removed, the process is repeated with the remaining set of machines and the remaining set of boards. The process continues until all automatic machines have been fully assigned; the remaining (unassigned) boards are assigned to the manual process. When used within a Branch-and-Bound process, the above solution provides an upper bound at the corresponding node, while a lower bound is given by solving the LP relaxation of the corresponding MBIP. Branching can be done on the non-integer components of the optimal solution of the LP-relaxed MBIP.

The algorithms described so far do not allow board splitting. In today's PCB industry, board splitting can lead to substantial savings, and it is worth investigating how to solve (1)-(8) with board splitting allowed. This is what we do next.

3. Solution Algorithms for Multiple Machines with Board Splitting

In this chapter we begin to put together an algorithm to solve the most general case of operation assignments for PCB assembly—namely the case with multiple automatic/semiautomatic machines with possible board splitting allowed. Aiming to achieve the most efficient and most effective procedure to handle large industrial-level cases, we will exploit special structures of the problem and propose what we believe to be the best way to make use of those special structures. Decomposition, Lagrangian relaxation, Greedy Board heuristics, and Problem Space Search are among many ideas that we will use to customize and integrate to solve our problem.

3.1 The Model Revisited

We begin by re-stating the basic mathematical model for operation assignments for PCB assembly and its various versions more precisely.

Consider a PCB assembly process consisting of I insertion machines used to produce PCBs to fulfill a production order of K board types containing a total of J component types. The order calls for dk boards of type k. The first I−1 machines are automatic or semiautomatic insertion machines, and these may be identical or unidentical. Each of these automatic/semiautomatic machines has a limited number of slots Ni, and is thereby able to handle at most Ni component types at a time. The I-th machine is the manual process, which can handle all component types. Each of the PCB board types to be produced contains a specific set of component types, as indicated by rjk.

That is, rjk = 1 if component type j is required on board type k and equals 0 otherwise.

Thus the set of component types to be inserted on board k is $J_k = \{j \mid r_{jk} = 1\}$. The total set of component types required for the whole production order is $\bigcup_{k \in K} J_k$, and the total number of different component types, the cardinality of $\bigcup_{k \in K} J_k$, is $J$.

All operation assignments are associated with two types of costs, namely operation (insertion) cost and setup cost. These costs are, respectively, directly related to the amount of times required to do the insertion and the setup. If board type k is assigned to machine i, then machine i must be set up for board type k incurring the setup cost sik.

This setup cost is incurred for each individual board that is set up to be processed by the machine, not just for each board type. The cost of inserting each unit of component type j on process i is given as cij. The total production cost is the sum of the total insertion cost and the total setup cost.

As mentioned above, the expected demand for board type k during the planning period is assumed to be known and equal to dk boards. Board type k requires njk units of component type j to be inserted on any insertion machine. Thus the total expected number of units of component type j required for board type k for the entire planning horizon is vjk = njkdk.

We wish to find an assignment schedule to assign component types to each automatic/ semiautomatic machine, assign each board type to machine(s), and assign specific components on a board type for final insertion by a specific machine that will minimize the total production cost. The assignment schedule sought is specified by the following decision variables:

xijk = 1 if component j of board k is assigned to machine i; 0 otherwise

yik = 1 if board k is setup on machine i; 0 otherwise

zij = 1 if component j is assigned to machine i; 0 otherwise

Thus the total cost we wish to minimize is:

Total production cost = Total insertion cost + Total setup cost

$= \sum_{i}\sum_{j}\sum_{k} c_{ij}v_{jk}x_{ijk} + \sum_{i}\sum_{k} s_{ik}d_k y_{ik}$

The assignment schedule sought of course has to satisfy the board requirements, the appropriate physical/logical constraints, and the machine capacity constraints:

For board type k ∈ K= {1,..,K}, component type j ∈ J = {1,..,J}, and machine i∈I =

{1,..,I}:

Board requirements:

• Each component j required on board k has to be assigned exactly once to a machine:

$\sum_{i \in I} x_{ijk} = r_{jk};\quad \forall j \in J,\ k \in K$

Physical constraints:

• Component j on board k cannot be assigned for insertion by machine i unless board k is assigned to machine i first:

$x_{ijk} \le y_{ik};\quad \forall i \in I,\ j \in J,\ k \in K$

• Component j on board k cannot be assigned for insertion by machine i unless a slot on machine i is assigned to handle component type j first:

$x_{ijk} \le z_{ij};\quad \forall i \in I,\ j \in J,\ k \in K$

Capacity constraints:

• The number of component types assigned to automatic machine i cannot exceed the

number of slots available on automatic machine i:

$\sum_{j \in J} z_{ij} \le N_i;\quad \forall i \in \{1,\dots,I-1\}$

These, along with the 0-1 requirements on each decision variable, define the constraint set of our model. For convenient reference, we now summarize the entire model (to be called PCB1) and the lists of decision variables and parameters as follows:

PCB1:

$\min\ f(x,y,z) = \sum_{i \in I}\sum_{j \in J}\sum_{k \in K} c_{ij}v_{jk}x_{ijk} + \sum_{i \in I}\sum_{k \in K} s_{ik}d_k y_{ik}$   (1)

subject to

$\sum_{i \in I} x_{ijk} = r_{jk}\quad \forall j \in J,\ k \in K$   (2)

$y_{ik} \ge x_{ijk}\quad \forall i \in I,\ j \in J,\ k \in K$   (3)

$z_{ij} \ge x_{ijk}\quad \forall i \in I,\ j \in J,\ k \in K$   (4)

$\sum_{j \in J} z_{ij} \le N_i\quad i = 1,2,\dots,I-1$   (5)

$x_{ijk} \in \{0,1\},\ y_{ik} \in \{0,1\},\ z_{ij} \in \{0,1\}\quad \forall i \in I,\ j \in J,\ k \in K$   (6)

Indices:
i = process (i = 1,…,I−1: automatic machines; i = I: manual process); I = {1,…,I}
j = component type, j ∈ J = {1,…,J}
k = board type, k ∈ K = {1,…,K}

Costs:
cij = cost of inserting one unit of component type j by process i
sik = one-time cost of setting up board type k on process i

Production Requirements:
dk = expected number of units of board type k during the planning horizon
rjk = 1 if component j is used in board k; 0 otherwise
njk = number of units of component type j used in board k
vjk = expected number of units of component type j used on board type k during the planning horizon (= dk njk)

Capacity Constraint:
Ni = number of different component types that can be assigned to process i

Decision Variables:
xijk = 1 if component j of board k is assigned to process i; 0 otherwise
yik = 1 if board k is set up on process i; 0 otherwise
zij = 1 if component j is assigned to process i; 0 otherwise
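To make the structure of PCB1 concrete, the sketch below writes (1)-(6) for a made-up toy instance using the open-source PuLP modeling library. The data and the choice of PuLP are illustrative assumptions; this is not the formulation code used for the experiments reported later.

from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

I, J, K = 2, 3, 2            # machine 0 = automatic, machine 1 = manual
N = {0: 2}                   # slot capacity of the automatic machine
c = {(i, j): 1.0 + 2.0 * i for i in range(I) for j in range(J)}   # c_ij
s = {(i, k): 3.0 - i for i in range(I) for k in range(K)}         # s_ik
d = {0: 10, 1: 4}                                                 # d_k
r = {(0, 0): 1, (1, 0): 1, (2, 0): 0, (0, 1): 0, (1, 1): 1, (2, 1): 1}
v = {(j, k): r[j, k] * d[k] for j in range(J) for k in range(K)}  # v_jk

m = LpProblem("PCB1", LpMinimize)
x = LpVariable.dicts("x", [(i, j, k) for i in range(I) for j in range(J)
                           for k in range(K)], cat=LpBinary)
y = LpVariable.dicts("y", [(i, k) for i in range(I) for k in range(K)], cat=LpBinary)
z = LpVariable.dicts("z", [(i, j) for i in range(I) for j in range(J)], cat=LpBinary)

# (1): total insertion cost + total setup cost
m += (lpSum(c[i, j] * v[j, k] * x[i, j, k]
            for i in range(I) for j in range(J) for k in range(K))
      + lpSum(s[i, k] * d[k] * y[i, k] for i in range(I) for k in range(K)))
for j in range(J):
    for k in range(K):
        m += lpSum(x[i, j, k] for i in range(I)) == r[j, k]     # (2)
        for i in range(I):
            m += x[i, j, k] <= y[i, k]                          # (3)
            m += x[i, j, k] <= z[i, j]                          # (4)
for i in N:                                                     # (5): automatic machines only
    m += lpSum(z[i, j] for j in range(J)) <= N[i]

m.solve()
print("total cost =", value(m.objective))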

We note that PCB1 minimizes the total production cost and makes no attempt to balance the workload across machines. Hence there is no guarantee that the makespan will be minimized, and the throughput may be compromised. Also, the cost of refilling component bins when they become empty is not considered. Instead we assume that the bins are refilled at the end of each day (the bins are large enough to hold at least a day's supply of any given component). If the refilling cost is significant, it can easily be incorporated within the framework of the model by adding a fraction of the bin-refilling cost to the insertion cost of each component. We also assume that no boards need to be reworked (i.e. all boards are completely assembled with no defects). If a board type requires significant rework, this can be handled by adding the expected amount of rework for board type k to the expected demand dk (assuming the expected proportion of defective boards is known in advance).

If board splitting is not allowed:

PCB1 as given by (1)-(6) allows each board to be loaded on different machines (i.e., the board is split) to complete the insertion of components on the board, if it is beneficial to do so. This is made possible by the explicit inclusion of the variables xijk. As illustrated at the end of Chapter 2, if no board splitting is allowed, then we can eliminate the variables xijk by summing inequality (3) over i and using Equation (2) to complete the modification. This means that, when no board splitting is allowed, (2) and (3) are replaced by:

$\sum_{i=1}^{I} y_{ik} \ge 1\quad \forall j \in J,\ k \in K \ni r_{jk} = 1$

To eliminate xijk from the objective function, by virtue of (3) we can replace (1) by:

$\text{Minimize } \sum_{i=1}^{I}\sum_{k=1}^{K}\left(\sum_{j=1}^{J} c_{ij}v_{jk} + s_{ik}d_k\right)y_{ik}$

Not only is the number of variables markedly reduced, the number of constraints will also be greatly reduced. In this research, we will only consider the case with board splitting i.e. PCB1 as it stands.

Commonality Ratio and Problem Size

If the matrix {rjk} is full, indicating that each component type j is required on each and every board type k, then we would require the full IJK values of the variables xijk in the model. Thus the total number of variables in PCB1 is IJK + IK + IJ, and the total number of constraints is 2IJK + JK + (I−1). For instance, for a typical PCB factory with 1,000 component types, 100 board types, and five automatic/semiautomatic insertion machines (I = 6), the problem has 606,600 variables and 1,300,005 constraints. In a realistic problem, however, different boards use many common components. In fact, the off-diagonal elements of the matrix {rjk} are mostly zero, signifying that the ratio

$\gamma = \dfrac{\sum_{j}\sum_{k} r_{jk}}{J}$

is typically small. This ratio is the average number of boards that share a common component type; the smaller the ratio, the less commonality among boards in terms of shared component types, and hence the fewer variables xijk required in the model. Indeed, for a problem with commonality ratio γ, the total number of variables required in the model is γIJ + IK + IJ, and the number of constraints is 2γIJ + JK + (I−1). The problem size of the previous example with γ = 1.25 is reduced to 14,100 variables and 115,005 constraints, which is much smaller than the full model with γ = K = 100. Nevertheless, since this is a combinatorial (zero-one) problem, the size is still too large for most general-purpose IP optimizers. This is precisely why we would like to find a more efficient method that can find a near-optimal solution, if not an optimal one, in reasonable time.
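The small helper below simply evaluates the size formulas just quoted and reproduces the figures for the example above; it is a quick illustrative check, not part of the solution method.

def pcb1_size(I, J, K, gamma=None):
    # Return (#variables, #constraints) of PCB1; gamma=None means a full r matrix.
    g = K if gamma is None else gamma          # full model: every board uses every j
    n_vars = int(g * I * J + I * K + I * J)
    n_cons = int(2 * g * I * J + J * K + (I - 1))
    return n_vars, n_cons

if __name__ == "__main__":
    print(pcb1_size(6, 1000, 100))             # (606600, 1300005)
    print(pcb1_size(6, 1000, 100, gamma=1.25)) # (14100, 115005)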

3.2 The Proposed Solution Strategy:

Upon close examination, the constraint set is decomposable with respect to k and almost decomposable with respect to j, except for the complicating (coupling) constraints (5). Constraint (2) is a straight assignment constraint, decomposable with respect to j and k and easy to handle for each (j, k) once decomposed. It therefore appears natural to begin with decomposition. To account for the coupling constraints (5), and in some sense constraints (3) and (4), we use Lagrangian duality and Lagrangian relaxation to bring constraints (5), along with (3) and (4), into the objective function using dual variables or nonnegative multipliers βi ≥ 0, λijk ≥ 0, μijk ≥ 0 respectively.

So with

$\beta_i \ge 0$ = multiplier of $\sum_{j} z_{ij} \le N_i$, $i = 1,\dots,I-1$

$\lambda_{ijk} \ge 0$ = multiplier of $x_{ijk} - y_{ik} \le 0$, $\forall i, j, k$

$\mu_{ijk} \ge 0$ = multiplier of $x_{ijk} - z_{ij} \le 0$, $\forall i, j, k$

The resulting Lagrangian function is:

PCB2:

$\theta(\lambda,\mu,\beta) = \min\ \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K} c_{ij}v_{jk}x_{ijk} + \sum_{i=1}^{I}\sum_{k=1}^{K} s_{ik}d_k y_{ik} + \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K}\lambda_{ijk}(x_{ijk} - y_{ik}) + \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K}\mu_{ijk}(x_{ijk} - z_{ij}) + \sum_{i=1}^{I-1}\beta_i\left(\sum_{j=1}^{J} z_{ij} - N_i\right)$

$= \min\ \sum_{i}\sum_{j}\sum_{k}\underbrace{(c_{ij}v_{jk} + \lambda_{ijk} + \mu_{ijk})}_{\omega_{ijk}}\,x_{ijk} + \sum_{i}\sum_{k}\underbrace{\left(s_{ik}d_k - \sum_{j}\lambda_{ijk}\right)}_{\tau_{ik}}\,y_{ik} + \sum_{i}\sum_{j}\underbrace{\left(-\sum_{k}\mu_{ijk} + \beta_i\right)}_{\varphi_{ij}}\,z_{ij} - \sum_{i=1}^{I-1}\beta_i N_i$

$= \min\ \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K}\omega_{ijk}x_{ijk} + \sum_{i=1}^{I}\sum_{k=1}^{K}\tau_{ik}y_{ik} + \sum_{i=1}^{I}\sum_{j=1}^{J}\varphi_{ij}z_{ij} - \sum_{i=1}^{I-1}\beta_i N_i$   (7)

subject to

$\sum_{i=1}^{I} x_{ijk} = r_{jk}\quad \forall j \in J,\ k \in K$   (8)

$x_{ijk} \in \{0,1\},\ y_{ik} \in \{0,1\},\ z_{ij} \in \{0,1\}\quad \forall i \in I,\ j \in J,\ k \in K$   (9)

(with $\beta_I = 0$, since the manual process has no capacity constraint). So now, if (λ, μ, β) is fixed at (λ(n), μ(n), β(n)), then PCB2 can be further decomposed into an x-subproblem, a y-subproblem, and a z-subproblem, each of which can be decomposed further into sub-subproblems that are easily solved by inspection:

x-subproblem:

$\min\ \sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{k=1}^{K}\omega_{ijk}^{(n)}x_{ijk}\quad \text{where } \omega_{ijk}^{(n)} = c_{ij}v_{jk} + \lambda_{ijk}^{(n)} + \mu_{ijk}^{(n)}$

s.t. $\sum_{i=1}^{I} x_{ijk} = r_{jk}\quad \forall j \in J,\ k \in K$

$x_{ijk} \in \{0,1\}\quad \forall i \in I,\ j \in J,\ k \in K$

This can be further decomposed for each $j \in J$, $k \in K$ as

$\min\ \sum_{i=1}^{I}\omega_{ijk}^{(n)}x_{ijk}\quad \text{s.t. } \sum_{i=1}^{I} x_{ijk} = r_{jk},\ x_{ijk} \in \{0,1\}\ \forall i \in I$

And the corresponding solution is:

For $(j,k)$ such that $r_{jk} = 1$: $x_{ijk}^{(n)} = 1$ for $i = \hat{i} = \arg\min_{i \in I}\omega_{ijk}^{(n)}$ and $x_{ijk}^{(n)} = 0$ for all $i \neq \hat{i}$   (10)

For $(j,k)$ such that $r_{jk} = 0$: $x_{ijk}^{(n)} = 0$ for all $i = 1,\dots,I$

Likewise, the y-subproblem is:

$\min\ \sum_{i=1}^{I}\sum_{k=1}^{K}\tau_{ik}^{(n)}y_{ik}\quad \text{where } \tau_{ik}^{(n)} = s_{ik}d_k - \sum_{j=1}^{J}\lambda_{ijk}^{(n)}$

s.t. $y_{ik} \in \{0,1\}\quad \forall i \in I,\ k \in K$

This can be further decomposed for each $i \in I$, $k \in K$ as:

$\min\ \tau_{ik}^{(n)}y_{ik}\quad \text{s.t. } y_{ik} \in \{0,1\}$

And the corresponding solution is: $y_{ik}^{(n)} = 1$ if $\tau_{ik}^{(n)} < 0$, and $y_{ik}^{(n)} = 0$ if $\tau_{ik}^{(n)} \ge 0$   (11)

Finally, the z-subproblem is:

$\min\ \sum_{i=1}^{I}\sum_{j=1}^{J}\varphi_{ij}^{(n)}z_{ij}\quad \text{where } \varphi_{ij}^{(n)} = -\sum_{k=1}^{K}\mu_{ijk}^{(n)} + \beta_i^{(n)}$

s.t. $z_{ij} \in \{0,1\}\quad \forall i \in I,\ j \in J$

This can be further decomposed for each $i \in I$, $j \in J$ as:

$\min\ \varphi_{ij}^{(n)}z_{ij}\quad \text{s.t. } z_{ij} \in \{0,1\}$

And the corresponding solution is: $z_{ij}^{(n)} = 1$ if $\varphi_{ij}^{(n)} < 0$, and $z_{ij}^{(n)} = 0$ if $\varphi_{ij}^{(n)} \ge 0$   (12)
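As an illustration of how cheaply PCB2 can be solved once the multipliers are fixed, the sketch below computes the coefficients ω, τ, φ and reads off (10)-(12) by inspection, returning the dual value of (7) including the constant term. The dense array layout, function name, and toy data are illustrative assumptions; this is a sketch, not the implementation used in this research.

import numpy as np

def solve_pcb2(c, v, s, d, r, lam, mu, beta, N):
    # c:(I,J) insertion costs, v:(J,K), s:(I,K), d:(K,), r:(J,K) 0-1,
    # lam, mu:(I,J,K) >= 0, beta:(I-1,) >= 0, N:(I-1,) slot capacities.
    I, J = c.shape
    K = v.shape[1]
    beta_full = np.concatenate([beta, [0.0]])             # manual process: no capacity
    omega = c[:, :, None] * v[None, :, :] + lam + mu      # coefficients of x
    tau = s * d[None, :] - lam.sum(axis=1)                # coefficients of y
    phi = -mu.sum(axis=2) + beta_full[:, None]            # coefficients of z

    x = np.zeros((I, J, K), dtype=int)
    best_i = omega.argmin(axis=0)                         # (10): cheapest machine per (j,k)
    for j, k in np.argwhere(r == 1):
        x[best_i[j, k], j, k] = 1
    y = (tau < 0).astype(int)                             # (11): y_ik = 1 iff tau_ik < 0
    z = (phi < 0).astype(int)                             # (12): z_ij = 1 iff phi_ij < 0
    theta = ((omega * x).sum() + (tau * y).sum()
             + (phi * z).sum() - (beta * N).sum())        # value of (7)
    return x, y, z, theta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    I, J, K = 3, 4, 2
    c = rng.uniform(1, 3, (I, J)); v = rng.integers(0, 5, (J, K)).astype(float)
    s = rng.uniform(1, 2, (I, K)); d = np.array([5.0, 3.0])
    r = (v > 0).astype(int)
    lam = np.zeros((I, J, K)); mu = np.zeros((I, J, K)); beta = np.zeros(I - 1)
    print(solve_pcb2(c, v, s, d, r, lam, mu, beta, np.array([2.0, 2.0]))[3])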

Now, by the (weak) duality theorem, if we can obtain a correct value of (λ, μ, β), say (λ*, μ*, β*), the optimal value θ(λ*, μ*, β*) of PCB2, obtained from the solutions x*, y*, z* given in (10), (11), and (12) respectively, serves as the tightest lower bound on the optimal value of PCB1. Indeed, if (x*, y*, z*) happens to be feasible for PCB1 as well, then it is also optimal for PCB1 by duality. For convenient reference, we now summarize these statements, which follow in a straightforward way from duality theory in linear programming. First, we formally state that solving PCB2 for any nonnegative multipliers yields a lower bound on the optimal value of PCB1 (essentially a version of the weak duality theorem).

Lemma 3.1 (Weak Duality Theorem): Let $0 \le \lambda^* \in R^{IJK}$, $0 \le \mu^* \in R^{IJK}$, $0 \le \beta^* \in R^{I-1}$, and let $x^*, y^*, z^*$ be given by (10), (11), and (12) respectively. Let $\theta(\lambda^*,\mu^*,\beta^*)$ be the corresponding value of (7), which is the optimal value of PCB2, $v(\text{PCB2})$, and let $v(\text{PCB1})$ be the optimal value of PCB1. Then $\theta(\lambda^*,\mu^*,\beta^*) \le v(\text{PCB1})$. This follows directly from the weak duality theorem [55]; since $(x^*, y^*, z^*)$ is a minimizer of PCB2, we have

PCB2, we have

$\theta(\lambda^*,\mu^*,\beta^*)$

$= \sum_{i}\sum_{j}\sum_{k}(c_{ij}v_{jk} + \lambda_{ijk}^* + \mu_{ijk}^*)x_{ijk}^* + \sum_{i}\sum_{k}\left(s_{ik}d_k - \sum_{j}\lambda_{ijk}^*\right)y_{ik}^* + \sum_{i}\sum_{j}\left(-\sum_{k}\mu_{ijk}^* + \beta_i^*\right)z_{ij}^* - \sum_{i=1}^{I-1}\beta_i^* N_i$

$\le \sum_{i}\sum_{j}\sum_{k}(c_{ij}v_{jk} + \lambda_{ijk}^* + \mu_{ijk}^*)x_{ijk} + \sum_{i}\sum_{k}\left(s_{ik}d_k - \sum_{j}\lambda_{ijk}^*\right)y_{ik} + \sum_{i}\sum_{j}\left(-\sum_{k}\mu_{ijk}^* + \beta_i^*\right)z_{ij} - \sum_{i=1}^{I-1}\beta_i^* N_i$
for all $x_{ijk}, y_{ik}, z_{ij}$ satisfying (2) and (6), i.e. (8) and (9)

$= \sum_{i}\sum_{j}\sum_{k} c_{ij}v_{jk}x_{ijk} + \sum_{i}\sum_{k} s_{ik}d_k y_{ik} + \sum_{i}\sum_{j}\sum_{k}\lambda_{ijk}^*(x_{ijk} - y_{ik}) + \sum_{i}\sum_{j}\sum_{k}\mu_{ijk}^*(x_{ijk} - z_{ij}) + \sum_{i=1}^{I-1}\beta_i^*\left(\sum_{j=1}^{J} z_{ij} - N_i\right)$

$\le \sum_{i}\sum_{j}\sum_{k} c_{ij}v_{jk}x_{ijk} + \sum_{i}\sum_{k} s_{ik}d_k y_{ik}$ for all $x_{ijk}, y_{ik}, z_{ij}$ satisfying (3)-(5) in addition to (2) and (6).

Thus $\theta(\lambda^*,\mu^*,\beta^*) \le v(\text{PCB1})$.

If we are fortunate enough to obtain multipliers that happen to be optimal dual variables as well, then we will have achieved the tightest lower bound: if $\theta(\lambda^*,\mu^*,\beta^*) = \max_{\lambda \ge 0,\,\mu \ge 0,\,\beta \ge 0}\theta(\lambda,\mu,\beta)$, then $\theta(\lambda^*,\mu^*,\beta^*)$ is the tightest lower bound of $v(\text{PCB1})$. To see this, we note that for $0 \le (\lambda,\mu,\beta) \in R^{IJK}\times R^{IJK}\times R^{I-1}$, $\theta(\lambda,\mu,\beta)$ as defined by PCB2 is the Lagrangian dual function of PCB1. Thus if $\theta(\lambda^*,\mu^*,\beta^*)$ is the optimal dual value, then $v(\text{PCB1}) \ge \theta(\lambda^*,\mu^*,\beta^*) \ge \theta(\lambda,\mu,\beta)$ for all $(\lambda,\mu,\beta) \ge 0$, indicating that $\theta(\lambda^*,\mu^*,\beta^*)$ is the tightest lower bound of $v(\text{PCB1})$.

Obtaining the tightest lower bound is still not the end of the story. The ideal situation is that the strong duality theorem is satisfied as well. That is, not only the corresponding solution (x*, y*, z*) minimizes the Lagrangian dual function, but it is also optimal to the primal PCB1. Since integer programs are non-convex, that will hardly be the case. However, if it happens that (x*, y*, z*) is feasible for PCB1, then it will also be optimal to PCB1. This follows directly from strong duality theorem [55].

Lemma 3.2: For $0 \le \lambda^* \in R^{IJK}$, $0 \le \mu^* \in R^{IJK}$, $0 \le \beta^* \in R^{I-1}$, let $(x^*, y^*, z^*)$, given by (10), (11), and (12), be a minimizer of PCB2, and let $\theta(\lambda^*,\mu^*,\beta^*)$ be the corresponding optimal value. If

i) $\theta(\lambda^*,\mu^*,\beta^*) = \max_{\lambda \ge 0,\,\mu \ge 0,\,\beta \ge 0}\theta(\lambda,\mu,\beta)$, i.e. $(\lambda^*,\mu^*,\beta^*)$ are optimal dual variables, and

ii) $(x^*, y^*, z^*)$ also satisfies constraints (3), (4), and (5) of PCB1,

then $(x^*, y^*, z^*)$ is also optimal to PCB1.

Application of the strong duality theorem specific to model (1)-(6) yields the following:

First, since $(\lambda^*,\mu^*,\beta^*)$ maximizes the dual function

$\theta(\lambda,\mu,\beta) = \min\ \sum_{i}\sum_{j}\sum_{k} c_{ij}v_{jk}x_{ijk} + \sum_{i}\sum_{k} s_{ik}d_k y_{ik} + \sum_{i}\sum_{j}\sum_{k}\lambda_{ijk}(x_{ijk} - y_{ik}) + \sum_{i}\sum_{j}\sum_{k}\mu_{ijk}(x_{ijk} - z_{ij}) + \sum_{i=1}^{I-1}\beta_i\left(\sum_{j=1}^{J} z_{ij} - N_i\right)$

over $(\lambda,\mu,\beta) \ge 0$, it is clear that

$\sum_{i}\sum_{j}\sum_{k}\lambda_{ijk}^*(x_{ijk}^* - y_{ik}^*) = 0,\quad \sum_{i}\sum_{j}\sum_{k}\mu_{ijk}^*(x_{ijk}^* - z_{ij}^*) = 0,\quad \sum_{i=1}^{I-1}\beta_i^*\left(\sum_{j=1}^{J} z_{ij}^* - N_i\right) = 0$   (13)

for otherwise these individual linear terms could be made unbounded and $(\lambda^*,\mu^*,\beta^*)$ could not be a maximizer of $\theta(\lambda,\mu,\beta)$.

By virtue of Lemma 3.1, (13), and the fact that $(x^*, y^*, z^*)$ is feasible for PCB2, we have

$v(\text{PCB1}) \ge \theta(\lambda^*,\mu^*,\beta^*)$
$= \sum_{i}\sum_{j}\sum_{k} c_{ij}v_{jk}x_{ijk}^* + \sum_{i}\sum_{k} s_{ik}d_k y_{ik}^* + \sum_{i}\sum_{j}\sum_{k}\lambda_{ijk}^*(x_{ijk}^* - y_{ik}^*) + \sum_{i}\sum_{j}\sum_{k}\mu_{ijk}^*(x_{ijk}^* - z_{ij}^*) + \sum_{i=1}^{I-1}\beta_i^*\left(\sum_{j=1}^{J} z_{ij}^* - N_i\right)$
$= \sum_{i}\sum_{j}\sum_{k} c_{ij}v_{jk}x_{ijk}^* + \sum_{i}\sum_{k} s_{ik}d_k y_{ik}^*$ due to (13)
$\ge v(\text{PCB1})$, since $(x^*, y^*, z^*)$ satisfies (2)-(6) and hence is feasible for PCB1.

Thus $v(\text{PCB1}) = \theta(\lambda^*,\mu^*,\beta^*) = \sum_{i}\sum_{j}\sum_{k} c_{ij}v_{jk}x_{ijk}^* + \sum_{i}\sum_{k} s_{ik}d_k y_{ik}^*$

Sometimes it is hard to know whether (λ*, μ*, β*) are optimizing dual variables or not. It is often simpler to check the dual feasibility of (λ*, μ*, β*), the primal feasibility of (x*, y*, z*), and the complementary slackness of (λ*, μ*, β*) and (x*, y*, z*). The following corollary is therefore useful.

Corollary 3.1: Let $0 \le \lambda^* \in R^{IJK}$, $0 \le \mu^* \in R^{IJK}$, $0 \le \beta^* \in R^{I-1}$ (dual feasible) and let $x^*, y^*, z^*$ be given by (10), (11), and (12) respectively. If, in addition,

(i) $(x^*, y^*, z^*)$ also satisfies constraints (3), (4), and (5) of PCB1, and

(ii) $\sum_{i}\sum_{j}\sum_{k}\lambda_{ijk}^*(x_{ijk}^* - y_{ik}^*) = 0$, $\quad\sum_{i}\sum_{j}\sum_{k}\mu_{ijk}^*(x_{ijk}^* - z_{ij}^*) = 0$, $\quad\sum_{i=1}^{I-1}\beta_i^*\left(\sum_{j=1}^{J} z_{ij}^* - N_i\right) = 0$,

then $(x^*, y^*, z^*)$ is also optimal to PCB1. The reasoning is exactly the same as above, with (ii) replacing (13).

So if we are to use PCB2 to solve PCB1, we must find (λ*, μ*, β*) ≥ 0 such that the corresponding (x*, y*, z*) satisfies (3)-(5) of PCB1 ((2) and (6) are automatically satisfied), and such that either (λ*, μ*, β*) maximizes the Lagrangian dual function or the complementary slackness condition, (ii) in Corollary 3.1, is satisfied. Should such multipliers be found, PCB1 is solved. As mentioned earlier, since PCB1 is non-convex, it is unlikely that primal feasibility of (x*, y*, z*) can be achieved even if dual-maximizing (or complementary-slackness-satisfying) multipliers could be found.

Most likely there will be a duality gap:

$\eta^* = v(\text{PCB1}) - \theta(\lambda^*,\mu^*,\beta^*) > 0$   (14)

If the gap is small enough, then solving PCB2 can be an effective way of finding a good solution to PCB1.

In this research we will use PCB2 to form a core part of the solution strategy. The idea is still to search for dual-maximizing multipliers (λ*, μ*, β*) that satisfy either (i) in Lemma 3.2 or (ii) in Corollary 3.1. This will give us (x*, y*, z*) and θ(λ*, μ*, β*), which can serve as a lower bound on v(PCB1). A strategy can then be developed to use (x*, y*, z*) to search for a nearby feasible solution $(\hat{x},\hat{y},\hat{z})$ of PCB1. The objective value $f(\hat{x},\hat{y},\hat{z})$ of PCB1 can then serve as an upper bound, and the estimated duality gap

$\eta = f(\hat{x},\hat{y},\hat{z}) - \theta(\lambda^*,\mu^*,\beta^*)$   (15)

can be used to track the progress of the search or to terminate the search if the duality gap η is sufficiently small. This strategy can be incorporated with Branch-and-Bound, where terminating a search means fathoming a node. In this application, since the problem is weakly to moderately coupled and since the commonality ratio is usually not very high in practice, experience has shown (and this will be demonstrated in Chapter 4) that

η is sufficiently small. This strategy could be incorporated with Branch-and-Bound, where terminating a search means fathoming a node. In this application, since the problem is weakly to moderately coupled and since the commonality ratio is usually not very high in practice, experience has shown (and this will be demonstrated in Chapter 4) that

1) A solution to PCB2 provides a strong lower bound as long as optimal or near-optimal multipliers (λ*, μ*, β*) can be found.

2) A good strategy to search for a nearby primal feasible solution $(\hat{x},\hat{y},\hat{z})$ is readily available, and the resulting duality gap η defined in (15) is usually small enough that the search can stop within a few iterations or a few nodes.

So in the remainder of this chapter, we will discuss procedures to find (λ*, μ*, β*) and a strategy to recover a primal feasible solution $(\hat{x},\hat{y},\hat{z})$ that is "close" to (x*, y*, z*).

3.3 Finding Multipliers

There are generally two ways to find a set of multipliers for use as (λ*, μ*, β*).

One is based on LP relaxation and the other is based on Lagrangian relaxation.

A relaxation of a minimization problem P is obtained when some constraints, often the ones that make P hard to solve, are relaxed or loosened, thereby creating a problem that is easier to solve than P itself. Let R be a relaxation of P. Then it is clear that the feasible set of R subsumes the feasible set of P. Hence the optimal value of R, v(R), will always be no worse than the optimal value of P, v(P). For a minimization problem P, we have

$v(R) \le v(P)$

Thus a relaxation R can always be used to create a lower bound of P. Whether this lower bound is tight and useful depends on the type of relaxation used.

The general idea in creating a relaxation is to choose for relaxation those constraints that make the original problem difficult to solve.

LP Relaxation LPr:

In PCB1, the constraints that make PCB1 a difficult problem to solve are the integrality (binary) requirements in (6). On relaxing such constraints, we have a relaxation problem that is a pure LP, which is a lot easier to solve for large problems than

PCB1 itself. An LP relaxation of PCB1 is PCB3 shown below:

PCB3:

$\min\ f(x,y,z) = \sum_{i \in I}\sum_{j \in J}\sum_{k \in K} c_{ij}v_{jk}x_{ijk} + \sum_{i \in I}\sum_{k \in K} s_{ik}d_k y_{ik}$   (16)

subject to

$\sum_{i \in I} x_{ijk} = r_{jk}\quad \forall j \in J,\ k \in K$   (17)

$y_{ik} \ge x_{ijk}\quad \forall i \in I,\ j \in J,\ k \in K$   (18)

$z_{ij} \ge x_{ijk}\quad \forall i \in I,\ j \in J,\ k \in K$   (19)

$\sum_{j \in J} z_{ij} \le N_i\quad i = 1,2,\dots,I-1$   (20)

$0 \le x_{ijk} \le 1,\ 0 \le y_{ik} \le 1,\ 0 \le z_{ij} \le 1\quad \forall i \in I,\ j \in J,\ k \in K$   (21)

Note that the objective function and the first four constraints of PCB3 are exactly the same as those of PCB1. The last set of constraints (21) is a relaxation of (6) in PCB1, where the integrality requirements are relaxed; this makes all variables continuous, bounded by 0 and 1.

Even though v(PCB3) can be used as a lower bound in a Branch-and-Bound approach to solving PCB1, it is generally a weak bound for this type of problem, and its usefulness in reducing the number of nodes is minimal. This is why LP relaxation is not often used for that purpose. Its usefulness in solving PCB1, however, is that the optimal dual variables (multipliers) associated with (18)-(20) can be used as estimates of the (λ*, μ*, β*) that we seek. As it turns out, these are quite good estimates, and we will use this approach to estimate multipliers extensively in this work. These dual variables, interpreted as shadow prices, also have potential for use in searching for a nearby feasible point $(\hat{x},\hat{y},\hat{z})$ at a later stage. Currently there are powerful methods for solving LPs of different sizes. For small- to medium-sized problems, the simplex method is adequate and readily available. For large LPs, state-of-the-art interior point methods are now the methods of choice.

Lagrangian Relaxation LR:

PCB2 is in fact the beginning of a Lagrangian relaxation of PCB1. The constraints that are relaxed are the coupling constraints (3)-(5). Because these constraints are lifted into the objective function, they are usually not satisfied during the intermediate steps of the solution process. Indeed, as Lemma 3.2 and Corollary 3.1 indicate, the goal is to make these constraints satisfied; once that happens, the optimal solution of PCB1 is reached. One main advantage of Lagrangian relaxation is that PCB2 can be solved quickly through decomposition, as shown in (10)-(12). Another advantage is that its solution (x*, y*, z*) as given in (10)-(12) is already in binary form, so searching for a nearby feasible point involves only switching rules, many of which are available. In other words, it is easier to find a good upper-bound update to reduce the duality gap. In general, lower bounds generated by Lagrangian relaxation are tighter than those created by the corresponding LP relaxation.

To see why, consider the following general integer program:

IP: $\min\ c^T x$ s.t. $x \in X = \{x \mid A_1x \le b_1,\ A_2x \le b_2,\ x \in Z_+^n\}$   (22)

Its LP relaxation is:

LPr: $\min\ c^T x$ s.t. $x \in X_{LP} = \{x \mid A_1x \le b_1,\ A_2x \le b_2,\ x \in R_+^n\}$   (23)

And relaxing constraints A2x ≤ b2, the corresponding Lagrangian relaxation is

LR: $\max_{\lambda \ge 0}\ \min_{x \in X_1}\ (c^T + \lambda^T A_2)x - \lambda^T b_2$, where $X_1 = \{x \mid A_1x \le b_1,\ x \in Z_+^n\}$   (24)

Without loss of generality, assume that X1 is bounded. Since it contains discrete points, it has a finite number of points, i.e.

$X_1 = \{x^{(1)}, x^{(2)}, \dots, x^{(N)}\}$

Thus, $v(\text{LR}) = \max_{\lambda \ge 0}\ \min_{n = 1,\dots,N}\ \left[(c^T + \lambda^T A_2)x^{(n)} - \lambda^T b_2\right]$, or equivalently

$v(\text{LR}) = \max\ w$
s.t. $w \le (c^T + \lambda^T A_2)x^{(n)} - \lambda^T b_2,\quad n = 1,\dots,N$
$\lambda \ge 0$

This is just an LP, so by the strong duality theorem,

$v(\text{LR}) = \min_{\alpha}\ \sum_{n=1}^{N}\alpha_n\left(c^T x^{(n)}\right) = c^T\left(\sum_{n=1}^{N}\alpha_n x^{(n)}\right)$
s.t. $\sum_{n=1}^{N}\alpha_n\left(b_2 - A_2x^{(n)}\right) \ge 0\ \Rightarrow\ A_2\left(\sum_{n=1}^{N}\alpha_n x^{(n)}\right) \le b_2$
$\sum_{n=1}^{N}\alpha_n = 1$
$\alpha_n \ge 0,\quad n = 1,\dots,N$

Since $\text{conv}(X_1) = \left\{x \,\middle|\, x = \sum_{n=1}^{N}\alpha_n x^{(n)},\ \sum_{n=1}^{N}\alpha_n = 1,\ \alpha_n \ge 0,\ n = 1,\dots,N\right\}$, it follows that

$v(\text{LR}) = \min_x\ c^T x$ s.t. $x \in X_{LR} = \{x \mid A_2x \le b_2,\ x \in \text{conv}(X_1)\}$

Now if we let $X_1^{LP} = \{x \mid A_1x \le b_1,\ x \in R_+^n\}$, then $X_{LP} = \{x \mid A_2x \le b_2,\ x \in X_1^{LP}\}$.

Since $X_1 \subseteq \text{conv}(X_1) \subseteq X_1^{LP}$, we have $X \subseteq X_{LR} \subseteq X_{LP}$.

Hence $v(\text{IP}) \ge v(\text{LR}) \ge v(\text{LPr})$   (25), as we set out to show.

So again we see from the above that, in general, the lower bounds generated by LR can do no worse than those generated by LP relaxation; in the worst case, the LR bounds and the LPr bounds are the same. Unfortunately, for this particular application the worst case holds. That is, it can be shown that the lower bounds generated by PCB2 are exactly the same as the bounds generated by PCB3:

$v(\text{PCB2}) = v(\text{PCB3})$   (26)

This is a consequence of what is called the integrality property. Again, consider the same IP as (22), LR as in (24), and LPr as in (23). We pay particular attention to the feasible sets of the three problems:

For IP: $X = \{x \mid A_2x \le b_2,\ x \in X_1\}$, where $X_1 = \{x \mid A_1x \le b_1,\ x \in Z_+^n\}$

For LR: $X_{LR} = \{x \mid A_2x \le b_2,\ x \in \text{conv}(X_1)\}$

And for LPr: $X_{LP} = \{x \mid A_2x \le b_2,\ x \in X_1^{LP}\}$, where $X_1^{LP} = \{x \mid A_1x \le b_1,\ x \in R_+^n\}$

The integrality property of LR is defined in terms of what happens to the LP-relaxed version of $X_1$, namely $X_1^{LP}$. We say that LR (hence $X_1^{LP}$) has the integrality property if any solution of LP1, defined as

LP1: $\min\ d^T x$ s.t. $x \in X_1^{LP} = \{x \mid A_1x \le b_1,\ x \in R_+^n\}$,

is always integral, regardless of the cost vector $d \in R^n$. Hence any solution of LP1 will always lie in the convex hull of $X_1$, i.e. $\text{conv}(X_1)$. Finally, any solution of LPr lies in

$X_{LR} = \{x \mid A_2x \le b_2,\ x \in \text{conv}(X_1)\}$.

Hence $v(\text{LPr}) \ge \min_{x \in X_{LR}} c^T x = v(\text{LR})$. But since $v(\text{LR}) \ge v(\text{LPr})$ in general, we have $v(\text{LR}) = v(\text{LPr})$ when the Lagrangian relaxation LR has the integrality property.

A graphical illustration of an $X_1^{LP}$ with the integrality property is a polytope in $(x_1, x_2, x_3)$-space whose only vertices are the integer points (0,0,0), (1,0,0), (0,1,0), and (0,0,1); any LP over such a set attains its optimum at one of these integer vertices.

Now we show that PCB2, the Lagrangian relaxation of PCB1, does indeed have the integrality property. Once the complicating constraints (3)-(5) have been lifted into the objective function by the Lagrangian relaxation, the remaining constraint set $X_1$ consists only of (2) and (6). That is:

$X_1 = \left\{(x,y,z) \,\middle|\, \sum_{i=1}^{I} x_{ijk} = 1,\ (j,k) \in L,\ x \in \{0,1\}^{I|L|},\ y \in \{0,1\}^{IK},\ z \in \{0,1\}^{IJ}\right\}$

where $L = \{(j,k) \in J \times K \mid r_{jk} = 1\}$.

The LP-relaxed version of $X_1$ is:

$X_1^{LP} = \left\{(x,y,z) \,\middle|\, \sum_{i=1}^{I} x_{ijk} = 1,\ (j,k) \in L,\ x \in [0,1]^{I|L|},\ y \in [0,1]^{IK},\ z \in [0,1]^{IJ}\right\}$

Clearly, any solution to the LP $\min\ (a^Tx + b^Ty + c^Tz)$ s.t. $(x,y,z) \in X_1^{LP}$ will always be at an integer vertex of $X_1^{LP}$, regardless of the cost vector $(a,b,c)$; thus the LR of PCB1, as specified by PCB2, has the integrality property. Since PCB3 is the LP relaxation of PCB1, we have v(PCB2) = v(PCB3), as we set out to show.

So we have now shown that the LR of PCB1, in the form of PCB2, does not produce any better lower bound on v(PCB1) than the LP relaxation PCB3. Would there be any benefit at all in using PCB2? The answer is yes. We reiterate the following uses of PCB2:

a) Once a good approximation of (λ*, μ*, β*) has been found, PCB2 can be used to quickly solve for (x*, y*, z*). Since this solution is already binary and is expected to be very close to the primal optimal solution (the duality gap is expected to be small), a neighborhood search using one of the many existing neighborhood search techniques should be effective in producing a nearby primal feasible point that is near-optimal if not optimal. This will be the main use of PCB2 in this work.

b) We can also use PCB2 to search for (λ*, μ*, β*) using an iterative scheme. Since the quality of the bounds produced will not be better than the LPr-produced bounds, we will choose this method only if it can accomplish the job faster and more cheaply. A typical iterative scheme for finding (λ*, μ*, β*) is the subgradient method.

The subgradient method is an adaptation of the steepest-ascent/descent gradient method for general nonlinear unconstrained optimization problems. At each step, a search direction from the current iterate is found, a stepsize along that direction is determined, and the iterate is updated by moving along the search direction by the distance dictated by the stepsize. In a typical gradient method for maximizing an unconstrained function, the gradient of the objective function at the current iterate, which is the direction along which the objective function increases at the fastest rate, is used as the search direction. The stepsize is either fixed or determined on the fly using a line search technique (optimal or inexact). When the search space is constrained by bounds (e.g. by nonnegativity), the gradient emanating from a boundary of the search space is adjusted to ensure that the search does not continue out of bounds. The adjusted direction is called a subgradient, hence the name "subgradient" method.

For our problem, the objective function we want to maximize is the Lagrangian dual function θ(λ, μ, β), and the maximizing parameter set we would like to find is π = (λ, μ, β). Starting with an initial π(0) = (λ(0), μ(0), β(0)), suppose the current iterate after t iterations is π(t) = (λ(t), μ(t), β(t)). A linear approximation to the gradient of θ(λ, μ, β) at π(t) is

$g^{(t)} = \nabla_{\pi}\theta(\lambda,\mu,\beta)\Big|_{\pi^{(t)}} = \begin{pmatrix} x^{(t)} - y^{(t)} \\ x^{(t)} - z^{(t)} \\ \sum_{j} z_{ij}^{(t)} - N_i \end{pmatrix}$   (27)

whose blocks contain the components $x_{ijk}^{(t)} - y_{ik}^{(t)}$ (for each $\lambda_{ijk}$), $x_{ijk}^{(t)} - z_{ij}^{(t)}$ (for each $\mu_{ijk}$), and $\sum_{j} z_{ij}^{(t)} - N_i$ (for each $\beta_i$, $i = 1,\dots,I-1$), and where (x(t), y(t), z(t)) is an optimal solution of PCB2 with (λ, μ, β) = (λ(t), μ(t), β(t)), found using (10)-(12). However, since some of the multipliers may be at their bounds, i.e. zero, the corresponding components of g(t) have to be adjusted to ensure that those multipliers do not go negative. The adjustment rule is as follows: suppose the nth component of the vector π(t) is at zero, i.e. $\pi_n^{(t)} = 0$; then the adjusted $g_n^{(t)} = 0$ if $g_n^{(t)} \le 0$, and it is unchanged otherwise. The gradient updated by this rule is called the subgradient and, for convenience, is still denoted by g(t). It is used as a search direction as follows:

$\pi^{(t+1)} = \pi^{(t)} + \alpha_t g^{(t)}$   (28)

The stepsize $\alpha_t$ can either be fixed for convenience or computed using

$\alpha_t = \dfrac{\phi_t\left(UB^* - v(\text{PCB2})^{(t)}\right)}{\left\|x^{(t)} - y^{(t)}\right\|^2 + \left\|x^{(t)} - z^{(t)}\right\|^2 + \sum_{i=1}^{I-1}\left(\sum_{j} z_{ij}^{(t)} - N_i\right)^2}$

where $\phi_t$ is a scalar satisfying $0 < \phi_t \le 2$, $v(\text{PCB2})^{(t)}$ is the optimal value of PCB2 evaluated at (x(t), y(t), z(t)), and UB* is the best upper bound, i.e. the minimum of the earlier UB* and the upper bounds determined by applying a neighborhood search for a nearby primal feasible solution around (x(t), y(t), z(t)). An initial upper bound UB0 is determined by applying the Greedy Board algorithm.

The iterative update process continues until either $\|g^{(t)}\|$ is sufficiently small or the complementary slackness conditions are close enough to being satisfied, that is, until the norm of

$\varepsilon = \begin{pmatrix} \sum_{i}\sum_{j}\sum_{k}\lambda_{ijk}(x_{ijk} - y_{ik}) \\ \sum_{i}\sum_{j}\sum_{k}\mu_{ijk}(x_{ijk} - z_{ij}) \\ \sum_{i=1}^{I-1}\beta_i\left(\sum_{j} z_{ij} - N_i\right) \end{pmatrix}$

is sufficiently small.
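A minimal sketch of one projected subgradient update (27)-(28) follows, using a binary solution (x, y, z) of PCB2 obtained from (10)-(12); the fixed phi_t, the array layout, and the function name are illustrative assumptions, not the exact implementation used here.

import numpy as np

def subgradient_step(lam, mu, beta, x, y, z, N, theta, best_ub, phi_t=1.0):
    g_lam = x - y[:, None, :]                # components for lambda_ijk
    g_mu = x - z[:, :, None]                 # components for mu_ijk
    g_beta = z[:-1].sum(axis=1) - N          # components for beta_i (automatic machines)

    # Projected subgradient: at a zero multiplier, drop negative components.
    g_lam = np.where((lam == 0) & (g_lam < 0), 0.0, g_lam)
    g_mu = np.where((mu == 0) & (g_mu < 0), 0.0, g_mu)
    g_beta = np.where((beta == 0) & (g_beta < 0), 0.0, g_beta)

    norm_sq = (g_lam ** 2).sum() + (g_mu ** 2).sum() + (g_beta ** 2).sum()
    if norm_sq == 0:
        return lam, mu, beta                 # nothing to update
    alpha = phi_t * (best_ub - theta) / norm_sq   # stepsize rule from the text
    lam = np.maximum(lam + alpha * g_lam, 0.0)
    mu = np.maximum(mu + alpha * g_mu, 0.0)
    beta = np.maximum(beta + alpha * g_beta, 0.0)
    return lam, mu, beta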

Like all first-order gradient-based methods, the subgradient method makes a major trade-off between computational cost and speed of convergence. Being the simplest gradient-based method, it has a low computational cost per iteration; however, its convergence rate is linear (slow). Convergence is typically fast initially, but becomes increasingly slow after a few iterations.

The simplicity of solving PCB2 given the multipliers, and the simplicity and low per-iteration cost of an iterative method such as the subgradient method above, obviously come at a price: overall, it cannot compete with LP relaxation-based methods in terms of both speed and quality of the estimates of (λ*, μ*, β*). So for small to moderately large problems, the LP relaxation-based method using PCB3 will be preferred for estimating (λ*, μ*, β*). For very large problems, where LP solvers may have difficulty, the subgradient method may be preferred.

3.4 The final step: Searching for the Primal Solution

To close out the discussion of the solution strategy, it remains to determine how to find an optimal or near-optimal primal solution from a dual optimal solution. Here the general idea is based on neighborhood search, since for this type of problem the decomposition-based procedure in PCB2 is expected to produce values of the primal variables that are close to the primal solution: they are already binary and close to being primal feasible, which is one of the key criteria for primal optimality, as indicated by Lemma 3.2 and Corollary 3.1. Infeasibility is due in large part to the capacity constraints (5) and in smaller part to the physical constraints (3) and (4). Infeasibility-specific heuristics and switching rules can be put together to overcome such infeasibilities effectively. The techniques employed in this work are now discussed.

Lower Bound Maintaining Algorithm (LBM)

Even though (10)-(12) can be used to find optimal solutions of the Lagrangian dual problem PCB2 easily, the resulting solutions are mostly infeasible for the original problem PCB1. Because the x-subproblem is solved separately from the y- and z-subproblems, yielding (10), (11), and (12) respectively, the physical constraints (3) and (4) may be violated. To avoid violating these constraints without significantly degrading the lower bound, we re-solve the x-subproblem by making switches in value among those xijk whose coefficients differ from one another by no more than ε. This helps preserve the lower bound previously obtained by solving PCB2 using (10)-(12). The switches are made in recognition of the current values of y and z, and are done to make sure that (3) and (4) are satisfied. The proposed switching heuristic is described below:

** LetJiij= {j ∈=∀∈Jz ,| 1}, i J

' ijJkKjk=∀∈∀∈arg min{ω ijk } , , iI∈ * IiIjk=∈ k|(ωω ijk −' ≤ ε ) {}ijkjk '* ⎧iifIjk, jk = 1 * ⎪ i jk = ⎨ '' * ⎪icvcvifIjk=−arg min ij jk* jk , jk > 1 * {}ijjk ⎩ iI∈ jk * ⎧1, if j∈ J * * ⎪ i jk xjJijk =∀∈⎨ ,∀∈kK ⎩⎪0, otherwise

After adjusting the values of xijk as above, some of the previously made board-to-machine assignments (values of yik) and/or component-to-machine assignments (values of zij) may become unnecessary. For example, after adjusting xijk, if the new xijk = 0 for all j, then there is no need to keep board k assigned to machine i; if the current yik is not already zero, it should be switched to zero to reduce the setup cost without violating feasibility. This type of reassignment is summarized below:

$y_{ik}^* = 1 \iff \exists j: x_{ijk}^* = 1$
$y_{ik}^* = 0 \iff x_{ijk}^* = 0\ \forall j$
$z_{ij}^* = 1 \iff \exists k: x_{ijk}^* = 1$
$z_{ij}^* = 0 \iff x_{ijk}^* = 0\ \forall k$
Let $J_i^* = \{j \in J \mid z_{ij}^* = 1\}$ for all $i \in I$

This switching/re-assignment process can be repeated iteratively until all constraints in (3) and (4) of PCB1 are satisfied, or until no more switching/re-assignment is possible, i.e. when both $I_k^*$ (the set of machines to which board k is assigned) and $J_i^*$ (the set of components assigned to machine i) are empty. In the latter case, we need to apply the problem space search algorithm (discussed in the next section) to find a primal feasible solution in the neighborhood.

Now we look at how to overcome infeasibility due to the capacity constraints (5). This happens frequently. We propose two heuristics for finding a primal feasible solution: 1) the LBM heuristic + Greedy Board algorithm, and 2) the LBM heuristic + Greedy Component algorithm. These are described in the next two subsections. The overall concept is illustrated in Figure 3.1.


[Figure 3.1 depicts the overall flow: solve the z-subproblem and y-subproblem to select z*ij and y*ik; solve the x-subproblem to select x*ijk without violating constraints (2) and (3), reassigning y*ik and z*ij as needed; then apply Greedy Board or Greedy Component to select z*ij without violating the capacity constraint (4).]

Figure 3.1 Conception of LBM-Based Feasible Solution Finder

LBM heuristics + Greedy Board

Violation of constraint (6) in PCB1 arises when the number of board types assigned to an automatic/semiautomatic insertion machine causes the number of components assigned to that machine to exceed its capacity. The main idea of the Greedy Board algorithm is to move a board type associated with an over-capacity machine ($i \in \bar{I}$) out of that machine to an under-capacity machine ($m \in \underline{I}$) or to the manual insertion process, by considering the minimum total cost of moving the board between machines. Let

$\underline{I} = \{\, i \in I \mid |J_i| < N_i \,\}$
$\bar{I} = \{\, i \in I \mid |J_i| > N_i \,\}$
$K_i^* = \{\, k \in K \mid r_{jk} = 1,\ \forall j \in J_i \,\}$

Consider all board types $K_i^*$ associated with the violation of the capacity of machine i and the component set $J_i$. At the over-capacity machine i ($i \in \bar{I}$), we move one board type at a time, and each time we recompute the minimum total cost; the process is repeated until machine i is no longer over capacity. An under-capacity machine m (including the manual insertion machine) is considered for re-assignment of board type k if adding all component types on the board (all j such that $r_{jk} = 1$) does not exceed the capacity of machine m ($m \in \underline{I}$). The total cost of moving board k to machine m can be calculated as follows:

$\rho^*_{m^* k} = \min_{m \in \underline{I}} \Big\{ (1 - y^*_{mk})\, s_{mk}\, d_k + \sum_{j \in J_i,\; r_{jk}=1} \big[ (1 - z^*_{mj})\, c_{mj} - c_{ij} \big]\, v_{jk} \Big\}$

Therefore, the minimum total cost of moving a board out of machine i is

$\rho^*_{m^* k^*} = \min_{k \in K_i^*} \{\, \rho^*_{m^* k} \,\}$

Then, we move all component types associated with board k* on machine i to machine m*, remove board type k* from the set $K_i^*$, and remove all component types associated with k* from the set $J_i$. We also reset

$y^*_{ik} = 0$ if $k = k^*$
$y^*_{mk} = 1$ if $m = m^*$ and $k = k^*$
$z^*_{ij} = 0$ if $j \in \{\, j \in J_i \mid r_{jk^*} = 1 \text{ and } \sum_{k \in K_i^*} r_{jk} = 1 \,\}$
$z^*_{mj} = 1$ if $j \notin J_m$, $m = m^*$ and $r_{jk^*} = 1$
$K_i^* = K_i^* - \{k^*\}$
$K_m^* = K_m^* + \{k^*\}$
$J_i = J_i - J'_i$, where $J'_i = \{\, j \in J_i \mid r_{jk^*} = 1 \text{ and } \sum_{k \in K_i^*} r_{jk} = 1 \,\}$
$J_m = J_m + J'_m$, where $J'_m = \{\, j \in J_i \mid r_{jk^*} = 1 \,\}$
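As an illustration of this selection step, the following Python sketch evaluates the move cost for every candidate board and receiving machine and returns the cheapest pair. Variable and array names are assumptions (arrays indexed as numpy arrays), and the sketch leaves out the subsequent set updates.

```python
def cheapest_board_move(i, Ki_star, J_i, under, y, z, s, d, c, v, r):
    """Pick the board k* on over-capacity machine i and the under-capacity
    machine m* that minimise the total moving cost (illustrative sketch)."""
    best = (float("inf"), None, None)         # (cost, m*, k*)
    for k in Ki_star:
        comps = [j for j in J_i if r[j, k] == 1]
        for m in under:                       # candidate receiving machines
            cost = (1 - y[m, k]) * s[m, k] * d[k] + sum(
                ((1 - z[m, j]) * c[m, j] - c[i, j]) * v[j, k] for j in comps)
            if cost < best[0]:
                best = (cost, m, k)
    return best
```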

LBM heuristics + Greedy Component

If the capacity constraint (6) is violated because of over-assignment of components to a machine, we can use the following LBM heuristics + Greedy Component to move component types from an over-capacity machine ($i \in \bar{I}$) to an under-capacity machine m (including the manual insertion machine), minimizing the total cost of moving a component across machines. Again, only one component type is moved at a time, and the selection of the component type to move is based on the minimum total cost of moving component type j to machine m ($m \in \underline{I}$). Consider $J_i$, the set of component types associated with the violation of the capacity of machine i ($i \in \bar{I}$) that have negative weighting costs in the z-subproblem. Component types in $\bar{J}_i$ are considered candidates for moving if their weighting costs lie between the highest weighting cost and the weighting cost at position $N_i$:

$\bar{J}_i = \{\, j \in J_i \mid \varphi_{ij} - \varphi_{i N_i} \le \varepsilon \,\}$

The total cost of moving component type j across to machine m can be calculated as

$\rho^*_{m^* j} = \min_{m \in \underline{I}} \Big\{ \sum_{k \in K_i^*,\; r_{jk}=1} \big[ \big( (1 - z^*_{mj})\, c_{mj} - c_{ij} \big) v_{jk} - (1 - y^*_{mk})\, s_{mk}\, d_k \big] \Big\}$

Thus, the minimum total cost of moving a component out of machine i is

$\rho^*_{m^* j^*} = \min_{j \in \bar{J}_i} \{\, \rho^*_{m^* j} \,\}$

Then, we move component type j* from machine i to machine m*, remove component type j* from the set $J_i$, and add all board types k associated with component type j* to $K_{m^*}^*$. We also reset

$y^*_{ik} = 0$ if $k \in \{\, k \in K_i^* \mid r_{j^*k} = 1 \text{ and } \sum_{j \in J_i} r_{jk} = 1 \,\}$
$y^*_{mk} = 1$ if $k \in \{\, k \in K_i^* \mid r_{j^*k} = 1 \,\}$
$z^*_{ij} = 0$ if $j = j^*$
$z^*_{mj} = 1$ if $m = m^*$ and $j = j^*$
$J_i = J_i - \{j^*\}$
$J_m = J_m + \{j^*\}$
$K_i^* = K_i^* - K'_i$, where $K'_i = \{\, k \in K_i^* \mid r_{j^*k} = 1 \text{ and } \sum_{j \in J_i} r_{jk} = 1 \,\}$
$K_m^* = K_m^* + K'_i$, where $K'_i = \{\, k \in K_i^* \mid r_{j^*k} = 1 \,\}$
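The component-move selection is analogous to the board move. A minimal sketch under the same assumed data layout (names are illustrative, not the author's notation):

```python
def cheapest_component_move(i, J_bar_i, Ki_star, under, y, z, s, d, c, v, r):
    """Pick the component j* on over-capacity machine i and the machine m*
    with the minimum total moving cost (illustrative sketch)."""
    best = (float("inf"), None, None)         # (cost, m*, j*)
    for j in J_bar_i:
        boards = [k for k in Ki_star if r[j, k] == 1]
        for m in under:
            cost = sum(((1 - z[m, j]) * c[m, j] - c[i, j]) * v[j, k]
                       - (1 - y[m, k]) * s[m, k] * d[k] for k in boards)
            if cost < best[0]:
                best = (cost, m, j)
    return best
```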

Problem Space Search Method

There is a possibility that the LBM algorithm above will stop without finding a feasible solution. This means that the domain of the neighborhood that we search using LBM algorithm above does not contain a feasible solution of PCB1. We will have to increase the domain of the neighborhood search by perturbing the original search domain. A procedure that is suitable for the job is Problem Space Search (PSS) described in [25, 61].

Like the stochastic local searches described in the previous chapter, we allow the search to move into zones that may not be the best choice possible (e.g., ones that may not correspond to the minimum costs) or that may not even be feasible. Combining the idea of PSS with the LBM algorithm, we evaluate multiple solutions obtained from multiple executions of the LBM algorithm using temporarily perturbed weights as input. PSS is carried out by generating artificial "neighbors", temporarily perturbing the problem input data, and finding feasible solutions to the perturbed problems. We temporarily modify the weights $\omega_{ijk}$, $\tau_{ik}$ and $\varphi_{ij}$ in a controlled manner and find the corresponding heuristic solutions. We denote the perturbed weights by $\omega'_{ijk}$, $\tau'_{ik}$ and $\varphi'_{ij}$ and generate them as follows:

$\omega'_{ijk} = \omega_{ijk} + \Omega \times \Psi \times u$

$\tau'_{ik} = \tau_{ik} + \Omega \times \Psi \times u$

$\varphi'_{ij} = \varphi_{ij} + \Omega \times \Psi \times u$

$\Psi = \dfrac{\sum_i \sum_j \sum_k \omega_{ijk} + \sum_i \sum_k \tau_{ik} + \sum_i \sum_j \varphi_{ij}}{(I \times J \times K) + (I \times K) + (I \times J)}$

$\Omega = \dfrac{1}{1 + 2r}$

where $\Omega$ is the parameter that controls the amount of perturbation, r is the number of perturbed weights, $\Psi$ is the average of all original weights, and $u \sim U(-1, 1)$ is a uniform random number between -1 and 1. If the LBM finds an infeasible or poor feasible solution, the problem space search is called to generate new weights as input to the LBM until a feasible solution is found. The neighborhood size is determined by comparing each new solution against the best feasible solution found.
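A small Python sketch of the perturbation step, with $\Omega$ taken as 1/(1 + 2r) following the reconstruction above; the array names and the use of one shared average $\Psi$ for all three weight families are assumptions:

```python
import numpy as np

def perturb_weights(omega, tau, phi, rng=np.random.default_rng()):
    """Generate one perturbed instance for the problem space search (sketch).

    Psi is the average of all original weights, Omega = 1/(1 + 2r) controls
    the perturbation size (r = number of perturbed weights), and u ~ U(-1, 1)
    is drawn independently for every weight.
    """
    r = omega.size + tau.size + phi.size
    psi = (omega.sum() + tau.sum() + phi.sum()) / r
    Omega = 1.0 / (1.0 + 2.0 * r)
    perturb = lambda w: w + Omega * psi * rng.uniform(-1.0, 1.0, size=w.shape)
    return perturb(omega), perturb(tau), perturb(phi)
```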


3.5 Implementing the LBM Algorithm

Now we can put together the various pieces discussed above into procedural steps for implementation. Two versions of the procedure are summarized in the table and flow chart below. One version uses the Greedy Board heuristics to restore feasibility of the capacity constraints (5) of PCB1, while the other uses the Greedy Component heuristics to do the same. Both versions are run in tandem, and the better solution of the two is selected as the final solution for that step.

Greedy Board version (the Greedy Component version is identical except in Step 4, as noted):

Step 1: Decomposition/Partitioning
• Apply Lagrangian relaxation to form PCB2 by dualizing constraints (3), (4) and (5) of PCB1.
• Decompose PCB2 into the x-subproblem, y-subproblem and z-subproblem.

Step 2: Finding Multipliers
• If PCB3 is small-to-medium-sized, solve PCB3 and use the LP dual solution associated with constraints (18), (19) and (20) of PCB3 as a good approximation of the multiplier vector (λ*, μ*, β*).
• Else, apply the subgradient method to find the optimal multipliers (λ*, μ*, β*).

Step 3: Finding the Solution
• Solve the x-, y- and z-subproblems separately (by inspection) to obtain the solution (x̂, ŷ, ẑ).
• Set v(PCB2) to be the lower bound (LB).
• If (x̂, ŷ, ẑ) is also a feasible solution of PCB1, set f(x̂, ŷ, ẑ) of PCB1 to be the upper bound (UB), compute the duality gap (UB − LB)/LB, and go to Step 6.
• Else, go to Step 4.

Step 4: Feasible Solution
• Greedy Board version: apply the LBM heuristics + Greedy Board algorithm, repeatedly moving the board type k with the minimum total cost of moving from the violating machine i to a machine m until constraint (5) is satisfied.
• Greedy Component version: apply the LBM heuristics + Greedy Component algorithm, repeatedly moving the component type j with the minimum total cost of moving from the violating machine i to a machine m until constraint (5) is satisfied.
• If a feasible solution (x̂, ŷ, ẑ) of PCB1 is found, compute the duality gap with f(x̂, ŷ, ẑ) = UB and go to Step 6.
• Else, go to Step 5.

Step 5: Neighbor Solutions
• Apply the problem space search (PSS) to PCB2 by perturbing the weights derived from the multiplier vector (λ, μ, β), and go to Step 3.

Step 6: Stopping Criteria
• If the duality gap < ε or the number of iterations reaches its limit, STOP.
• Else, go to Step 5.

Table 3.1 The Procedure of the LBM Algorithm
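The following Python sketch shows how Steps 3-6 of Table 3.1 could be orchestrated. The callables stand in for the pieces described in the text (subproblem solves, feasibility check, LBM repair, PSS perturbation) and are assumptions for illustration, not a definitive implementation.

```python
def lbm_driver(solve_subproblems, is_feasible, objective, lbm_repair,
               perturb, max_iter=100, tol=0.01):
    """High-level loop corresponding to Steps 3-6 of Table 3.1 (sketch).

    solve_subproblems() returns (x, y, z, lower_bound) from (10)-(12),
    lbm_repair() applies LBM + Greedy Board/Component, perturb() applies PSS.
    """
    best_ub, best_sol = float("inf"), None
    x, y, z, lb = solve_subproblems()                 # Step 3
    for _ in range(max_iter):
        if not is_feasible(x, y, z):
            x, y, z = lbm_repair(x, y, z)             # Step 4
        if is_feasible(x, y, z):
            ub = objective(x, y, z)
            if ub < best_ub:
                best_ub, best_sol = ub, (x, y, z)
            if (best_ub - lb) / lb < tol:             # Step 6: stopping rule
                break
        perturb()                                     # Step 5: PSS
        x, y, z, lb = solve_subproblems()             # back to Step 3
    return best_sol, best_ub, lb
```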

Figure 3.2 Flow Process for LBM algorithm

3.6 Computation Complexity

Traditionally, the size of an instance of an optimization problem has been described by the number of variables and the number of constraints. These two parameters may not be adequate: there are algorithms whose number of steps depends explicitly on the magnitude of the numerical data. An analysis of computational complexity often provides good insight into the degree of difficulty in solving a problem.

PCB1 is a pure 0-1 integer program and is known to be NP-hard. If solved by a brute-force enumerative algorithm, we would have to examine O(m·2^n) solutions, where m is the number of constraints and n is the number of decision variables.

Applying the proposed LBM algorithm, the computational complexity is much lower. Here we briefly examine the complexity of the proposed algorithm by looking at the steps associated with the final search for primal feasible solutions, i.e., Steps 3 to 6. The computational complexity of each step is as follows:

• Each time PCB2 is solved:

o each sub-x-subproblem of PCB2 requires one sort over I, i.e., I log(I) computations; the x-subproblem is executed γJ times, so the computation required is γJ·I log(I);

o each y-subproblem requires one sort over K, i.e., K log(K) computations; the y-subproblem is solved I times, so the computation required is I·K log(K);

o each z-subproblem requires one sort over J, i.e., J log(J) computations; the z-subproblem is solved I times, so the computation required is I·J log(J);

o therefore, the computational complexity of (10)-(12) is I(γJ log(I) + K log(K) + J log(J)) steps.

• Each time the LBM algorithm is executed, at most I sorts are performed, and each sort over I has complexity I log(I), leading to I² log(I) computations. The LBM algorithm is invoked at most γJ times, so an upper limit on the computation required for this step is γJ·I² log(I).

• If the Greedy Board step is reached, let $\underline{I} = \{i \in I \mid |J_i| < N_i\}$, $\bar{I} = \{i \in I \mid |J_i| > N_i\}$ and $K_i^* = \{k \in K \mid r_{jk} = 1,\ \forall j \in J_i\}$. At most K* sums over all I machines, one sort over I and one sort over K* are performed, leading to K*²·I² log(K*·I) computations; the Greedy Board step is invoked at most K* times, with P candidates over all I machines, so an upper limit on the computation required is P·K*³·I² log(K*·I) steps.

• If the Greedy Component step is reached, let $\bar{J}_i = \{j \in J_i \mid \varphi_{ij} - \varphi_{i N_i} \le \varepsilon\}$. At most J sums over all I machines, one sort over I and one sort over J are performed, leading to J²·I² log(J·I) computations; the Greedy Component step is invoked at most J times, with P candidates over all I machines, so an upper limit on the computation required is P·J²·I² log(J·I) steps.

The step involving PSS is reached at most R times; thus an upper limit on the computational complexity of solving PCB2 is R·I(γJ log(I) + K log(K) + J log(J)) steps, and that of the LBM step is R·γJ·I² log(I).

Therefore, an upper limit on the computation required to implement the LBM heuristics combined with:

• the Greedy Board and problem space search algorithms is R·I(γJ log(I) + K log(K) + J log(J)) + R·γJ·I² log(I) + P·K*³·I² log(K*·I) steps;

• the Greedy Component and problem space search algorithms is R·I(γJ log(I) + K log(K) + J log(J)) + R·γJ·I² log(I) + P·J²·I² log(J·I) steps.

A small numerical illustration of these bounds follows.
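For a rough sense of scale, the following Python snippet plugs the largest test size into the Greedy Component bound above. The values of γ, R and P are illustrative assumptions, not figures taken from the text.

```python
from math import log2

# Largest test size (I=5, J=1000, K=100); gamma, R and P are assumed values.
I, J, K, gamma, R, P = 5, 1000, 100, 1.25, 10, 10
bound = (R * I * (gamma * J * log2(I) + K * log2(K) + J * log2(J))
         + R * gamma * J * I**2 * log2(I)
         + P * J**2 * I**2 * log2(J * I))
print(f"~{bound:.2e} elementary operations")  # polynomial, vs O(m*2^n) enumeration
```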


4 Test Problems and Computational Results

4.1 Test Problems

We use random variate generation to create test problems for evaluating the efficiency of our algorithm. Tests are performed both for the single automatic/semiautomatic insertion machine case and for the multiple machines case. Individual machines may have equal or unequal costs for inserting components and setting up boards, and the multiple insertion machines may be identical or unidentical. Only one manual insertion machine (process) needs to be considered. In our test problems, we emphasize cases with multiple insertion machines and board splitting, because we have not yet found any other algorithm to solve them except the Greedy Board (GRD) heuristics and the Stingy Component heuristics proposed in [4]. Therefore, we demonstrate the efficiency of our algorithm by comparing its results with the Greedy Board algorithm (GRD) on a large number of generated test problems. For cases of multiple machines with board splitting, the PCB problem can be classified into three categories, namely:

1) Unidentical automatic or semiautomatic machines (each machine has its own processing and setup costs)

2) Identical automatic or semiautomatic machines (all machines have the same processing cost and setup cost)

3) Mixed unidentical/identical automatic or semiautomatic machines (a mix of machines with identical and unidentical processing and setup costs)


Based on data observed from real operations of many brands of automatic insertion machines, component insertion times (costs) are randomly generated in the range of 1 to 10 (in units of 0.1 second). The setup time (cost) is between 5 and 10 times the processing time for automatic insertion machines. The demand for board k per period is generated randomly within the range 100 to 300. The capacity of individual automatic insertion machines varies between 50% and 80% of the average number of components per machine (if the components were divided equally among all machines). The commonality ratio--the component-board pair ratio--is selected between 1.2 and 1.3, based on a case study at Hewlett Packard [4, 11] which uses 1.25. The manual insertion machine is distinctly slower than the automatic or semiautomatic insertion machines, and a variety of manual insertion machines are generated whose insertion and setup times are 1.5, 2, 5, or 10 times those of the automatic or semiautomatic machines. The difference between times on the manual machine and times on the automatic/semiautomatic machines can greatly influence the efficiency of the algorithm.

Based on the above generation strategy, the problems generated can be categorized into four groups based on the levels of challenge they represent. Each group will have seven problem sizes (up to 5 automatic or semiautomatic machines, up to 1,000 component types and up to 100 board types) containing up to 13,000 binary decision variables. The following table shows characteristics of test problem designs:


Parameter to generate | Identical machines case | Unidentical machines case
Insertion cost c_ij (i = 1,…,I−1) | c_i ~ U[1, 5], c_ij = c_i (integer) | c_ij ~ U[1, 10] (integer)
Insertion cost c_Ij (manual) | integers of c_ij × m1 | integers of (max over i ≠ I of c_ij) × m1
   where, in both cases: Type A, m1 = 1.5; Type B, m1 = 2; Type C, m1 = 5; Type D, m1 = 10
Setup cost s_ik | integers of c_ij × u1, with s_ik = s_i and u1 ~ U[5, 10] | s_ik = (sum over j in J of c_ij / |J|) × u1, with u1 ~ U[5, 10]
Setup cost s_Ik (manual) | integers of s_ik × m1 | integers of (max over i ≠ I of s_ik) × m1
   where, in both cases: Type A, m1 = 1.5; Type B, m1 = 2; Type C, m1 = 5; Type D, m1 = 10
Board demand per planning period d_k | random integers U[100, 300] | random integers U[100, 300]
Machine capacity N_i | integers of u2 × J / I, with u2 = 0.8 | integers of u2 × J / I, with u2 ~ U[0.5, 0.8]
Number of components for board k per planning period, v_jk | n_jk × d_k, with n_jk ~ U[1, 10] (integer) | n_jk × d_k, with n_jk ~ U[1, 10] (integer)

Table 4.1: Characteristics of Test Problem Designs
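As a rough illustration of how an unidentical-machine instance might be generated from Table 4.1, the Python sketch below follows the right-hand column. The commonality-ratio mechanism, the treatment of the manual machine as the last index, and the rounding choices are assumptions rather than details given in the text.

```python
import numpy as np

def generate_unidentical_instance(I=3, J=100, K=30, m1=2.0,
                                  rng=np.random.default_rng(0)):
    """Random test instance loosely following Table 4.1, unidentical column
    (sketch; machine index I-1 plays the role of the manual process)."""
    c = rng.integers(1, 11, size=(I, J)).astype(float)    # insertion costs c_ij ~ U[1,10]
    c[I - 1] = np.ceil(c[:I - 1].max(axis=0) * m1)        # manual machine, type factor m1
    u1 = rng.uniform(5, 10, size=I)
    s_row = np.ceil(c.mean(axis=1) * u1)                  # setup cost per machine
    s = np.tile(s_row[:, None], (1, K))                   # s_ik
    s[I - 1] = np.ceil(s[:I - 1].max(axis=0) * m1)        # manual machine setup
    d = rng.integers(100, 301, size=K)                    # board demand per period
    n = rng.integers(1, 11, size=(J, K))                  # components per board
    r = (rng.random((J, K)) < 1.25 / K).astype(int)       # sparse incidence, ~1.25 commonality (assumed)
    v = n * d * r                                         # v_jk = n_jk * d_k
    u2 = rng.uniform(0.5, 0.8, size=I - 1)
    N = np.floor(u2 * J / I).astype(int)                  # automatic machine capacities
    return c, s, d, v, r, N
```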

4.2 Computational Results

The proposed algorithm is implemented entirely in MATLAB version 7.0.4 and run under Microsoft Windows XP Home. We performed the computational runs on a PC with an Intel Pentium 4 (Hyper-Threading) at 3.0 GHz and 2 GB of RAM.

The test results can be divided into two parts. The first part is the test on performance. Even though we conducted these tests last, we present them first to get to the point. Here we test the speed and solution quality of our proposed algorithms against arguably the best available general-purpose IP solver--CPLEX--as well as against the greedy-based algorithms, the only known methods available to solve operation assignment in PCB assembly with multiple machines and no board splitting.

In the second part, we present the results of what we might call sensitivity and robustness tests. Here we show that the proposed algorithms perform quite well under a wide variety of problem conditions and examine which aspects of the problem affect the performance of the algorithms the most.

4.2.1 Performance Tests

We present results of efficiency tests for single and multiple insertion machines. Efficiency is measured in terms of both computational time and the percentage above the true optimum. CPLEX 8.1 is used to find the true optimal solution whenever it can do so in reasonable time. Here we wish to show that the proposed LBM algorithms perform competitively with the best of the IP optimizers, CPLEX, for multiple machines/no board splitting problems, and that they can even efficiently handle larger problems that CPLEX cannot solve in reasonable time. Since greedy-based heuristics--particularly the Greedy Board heuristics (GRD)--are the only known methods available to solve the multiple-machines-with-board-splitting PCB operation assignment problem, we use them in this performance test as well. With multiple automatic/semiautomatic machines, whether or not the machines are identical affects how quickly the assignments can be completed, so we perform the tests and present the results in three separate classes: multiple identical machines, multiple unidentical machines, and multiple mixed unidentical/identical machines. We generate test problems according to the test problem generation strategy described in Section 4.1. Two single-machine cases, with identical and unidentical processes, are also presented. We select problems so that all problem types and all seven problem sizes are represented. Results are presented numerically as well as graphically where appropriate.

The performance measures we use for comparison here are the computation time in seconds and the percent above optimal, computed as

$\left( \dfrac{\text{upper bound} - \text{optimal solution}}{\text{optimal solution}} \right) \times 100\%.$

The optimal solution is the one obtained by CPLEX whenever it can find one. CPLEX is set to run until either it finds an optimal solution or it exceeds a time limit of 3600 seconds; in the latter case, the best solution found by CPLEX up to that point is used as an approximation of the optimal solution. The upper bound corresponds to the best primal feasible solution found by the method in question. The CPU times in seconds of GRD and LBM are MATLAB run times, whereas the CPU time* of the CPLEX solution is measured by the CPLEX software. Since, according to [62], C++ programs are more than twice as fast as MATLAB-based programs, we keep a conservative approximate factor of two in mind when comparing CPLEX time and MATLAB time.

Single Machine Test:

We begin with the simplest test cases, ones with a single automatic machine.

When paired with the manual process, we consider both the case where the two machines have identical processing time and setup time, and the case where they do not. Five test problems all with 1000 component types and 100 board types were carefully selected and generated, with the results shown in Tables 4.2 and 4.3 below.

Test | Num. Var. | GRD (UB*) | Time (s) | %Above Optimal | LBM (UB*) | Time (s) | %Above Optimal | Optimal Value | Time* (s)
1 | 4712 | 1.1E+07 | 0.3 | 8.18% | 9708428 | 0.6 | 0.00% | 9708428 | <0.1
2 | 4786 | 1.1E+07 | 0.2 | 6.68% | 9844068 | 0.4 | 0.00% | 9844068 | <0.1
3 | 4774 | 9399447 | 0.2 | 12% | 8375837 | 5 | 0.02% | 8374307 | <0.1
4 | 4766 | 1.2E+07 | 0.2 | 8% | 11149420 | 0.4 | 0.00% | 11149420 | <0.1
5 | 4762 | 1.3E+07 | 0.2 | 10% | 11411255 | 6 | 0.00% | 11411255 | <0.1
Table 4.2 Results: Single Machine with identical processing and set up times

Test | Num. Var. | GRD (UB*) | Time (s) | %Above Optimal | LBM (UB*) | Time (s) | %Above Optimal | Optimal Value | Time* (s)
1 | 4740 | 1.1E+08 | 0.2 | 21% | 91464824 | 0.4 | 0.00% | 91464824 | <0.1
2 | 4714 | 1.1E+07 | 0.2 | 21% | 8909532 | 6 | 0.00% | 8909532 | <0.1
3 | 4792 | 9406898 | 0.2 | 25% | 7517439 | 0.6 | 0.00% | 7517439 | <0.1
4 | 4726 | 9217746 | 0.2 | 19% | 7719932 | 0.5 | 0.00% | 7719932 | <0.1
5 | 4714 | 9558683 | 0.2 | 25% | 7654100 | 0.6 | 0.00% | 7654100 | <0.1
Table 4.3 Results: Single Machine with unidentical processing and set up times


In both cases, CPLEX is unbeatable: it found the optimal solution in less than 0.1 seconds. This is to be expected, since CPLEX is best at what it does and this problem is far too small to give CPLEX any trouble. The proposed LBM algorithm also has no difficulty handling both types of problems, finding the optimal solution to all test problems in between less than a second and 6 seconds of MATLAB time. It appears that GRD has difficulty finding even near-optimal solutions, with the unidentical case being the more difficult of the two. Again, this is to be expected for algorithms such as GRD: a greedy-based algorithm is a sequential decision algorithm in which "regrets" can pose a major problem. When the coefficients of the objective function are identical or about the same, regrets are minimal, since the order in which choices are made does not appear to matter.

Multiple Unidentical Machines:

Here we carefully select problems that represent all problem types and problem sizes. In all cases and all problem sizes, CPLEX can still find the optimal solution in very good time. The proposed LBM also solved all problems very effectively: it achieved optimal solutions for the majority of test problems, with the rest being very near-optimal and the worst duality gap about 0.7%. GRD again produces poor-quality solutions for all problems, with its best duality gap around 10%. In this case, no problem aspect seems to affect the performance (speed and solution quality) of CPLEX and LBM except size and perhaps symmetry. In this "unidentical" case, where symmetry is far-fetched, both CPLEX and LBM perform well, finding optimal or near-optimal solutions in very good time. This makes sense, since both methods rely on identifying "differences" or "distinctions" to help eliminate "solutions"; if all solutions look alike, it is hard to identify which ones to eliminate. We should see more evidence to support this observation when we look at cases with identical machines. Size clearly affects the speed of both methods. As shown in Figure 4.1, for both methods the increase in computational time in the lower range of problem sizes is gradual and roughly linear, but at the larger end the increase is more rapid and exponential, with CPLEX appearing to have a higher rate of increase than LBM. For LBM, the most expensive step is the final search for a primal feasible solution, particularly the Problem Space Search (PSS): the more PSS has to be performed, the more time LBM takes to do the job.


Figure 4.1 Average CPU Time between LBM and CPLEX for Unidentical Processes

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 153 158809 0.22 1.71% 156135 0.53 0.00% 156135 <1 B 153 267980 0.21 12.00% 239271 0.56 0.00% 239271 <1 C 153 360724 0.24 20.97% 298202 0.49 0.00% 298202 <1 D 147 622585 0.23 30.88% 475689 0.5 0.00% 475689 <1 Average 0.52 Average <1 Table 4.4 Results: Problem Size 3×20×5 90

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 759 878785 0.57 9.36% 804408 1.34 0.10% 803598 <1 B 771 1019878 0.58 12.44% 908189 1.03 0.12% 907077 <1 C 780 2180756 0.51 19.15% 1830311 0.92 0.00% 1830311 <1 D 774 4325385 0.51 33.10% 3249706 0.9 0.00% 3249706 <1 Average 1.05 Average <1 Table 4.5 Results: Problem Size 3×100×30

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 1012 947684 0.56 9.65% 864276 0.96 0.00% 864276 <1 B 1008 923089 0.34 11.20% 832342 1.37 0.27% 830093 <1 C 1000 2128940 0.56 21.66% 1749865 1.18 0.00% 1749865 <1 D 1032 3941026 0.44 35.14% 2916320 1.05 0.00% 2916320 <1 Average 1.14 Average <1 Table 4.6 Results: Problem Size 4×100×30

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 1250 773984 0.62 19.19% 653727 1.52 0.67% 649366 2 B 1300 1167431 0.58 22.38% 953978 1.35 0.00% 953978 <1 C 1250 1818603 0.63 35.34% 1343725 1.26 0.00% 1343725 1 D 1290 5571711 0.5 32.57% 4202848 1.29 0.00% 4202848 <1 Average 1.36 Average 1.5 Table 4.7 Results: Problem Size 5×100×30


Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 7017 7600219 2.7 16.52% 6522571 7.62 0.00% 6522571 1 B 7008 8157068 2.60 20.96% 6743519 8.22 0.00% 6743519 <1 C 7155 16872433 2.68 44.81% 11651384 8.95 0.00% 11651384 <1 D 7083 35481656 2.66 33.40% 26598107 8.83 0.00% 26598107 <1 Average 8.40 Average 1 Table 4.8 Results: Problem Size 3×1000×100

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 9500 8089695 2.8 14.61% 7085451 15.65 0.38% 7058411 3 B 9344 8942086 3.30 23.83% 7230615 12.03 0.13% 7220980 2 C 9436 18287559 3.18 32.78% 13773160 11.2 0.00% 13773160 1 D 9460 28249976 3038 39.76% 20229555 13.63 0.08% 20213100 1

Average 13.13 Average 1.75 Table 4.9 Results: Problem Size 4×1000×100

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 11910 7347468 3.29 26.43% 5852501 24.21 0.71% 5811344 26 B 11885 9828281 3.23 24.38% 7957270 26.97 0.70% 7901797 5 C 11730 17114539 3.62 45.40% 11776813 19.82 0.05% 11771000 2 D 11770 35964324 3.12 39.85% 25722317 19.69 0.02% 25717000 2

Average 22.67 Average 8.75 Table 4.10 Results: Problem Size 5×1000×100


Multiple Machines Identical Machines

As predicted, both CPLEX and LBM begin to have difficulty finding an optimal solution in reasonable time. In Figure 4.2, the problem with CPLEX is computational time, which increases at such a high exponential rate that it fails to find a solution within the 3600-second time limit for sizes 3×1000×100 or higher. The increase in time for LBM is linear and much more gradual. The percent above optimal also worsens very slowly, saturating at about 3-3.5%. This seems to indicate that LBM can still handle problems much larger than the current largest, 5×1000×100.


Figure 4.2 Average CPU Time between LBM and CPLEX for Identical Processes

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 147 74583 0.06 0.00% 74583 0.72 0.00% 74583 <1 B 153 166227 0.04 10.14% 150921 0.7 0.00% 150921 <1 C 147 457015 0.03 26.24% 363265 0.7 0.34% 362035 <1 D 147 804785 0.22 43.09% 562435 0.56 0.00% 562435 <1 Average 0.67 Average <1 Table 4.11 Results: Problem Size 3×20×5 93

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 765 354416 0.49 5.99% 342890 0.89 2.54% 334398 1 B 753 427180 0.37 7.35% 404460 1.08 1.64% 397944 1 C 768 808468 0.39 14.68% 711546 1.51 0.93% 705006 1 D 780 2670858 0.39 32.05% 2040288 2.07 0.88% 2022558 1 Average 1.39 Average 1 Table 4.12 Results: Problem Size 3×100×30

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 1016 750408 0.35 4.26% 730156 3.55 1.45% 719714 14 B 1000 800172 0.51 11.03% 743584 2.45 3.17% 720704 2 C 1036 483766 0.36 33.38% 367529 2.82 1.33% 362689 5 D 1032 3201000 0.33 42.56% 2254165 2.76 0.39% 2245420 2 Average 2.90 Average 5.75 Table 4.13 Results: Problem Size 4×100×30

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 1290 343119 0.38 6.39% 327918 1.57 1.68% 322509 327 B 1255 552273 0.67 7.41% 531852 1.63 3.44% 514185 9 C 1250 1724710 0.46 19.81% 1478950 4.16 2.74% 1439525 14 D 1250 666289 0.6 48.58% 455897 3.94 1.66% 448443 3 Average 2.83 Average 88.25 Table 4.14 Results: Problem Size 5×100×30


Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 7053 6751990 3.22 4.31% 6597954 34.2 1.93% 6473000 26610 B 7161 4147714 2.90 8.95% 3863334 68.76 1.48% 3807000 30505 C 7086 14097332 3.13 26.21% 11239736 68.39 0.62% 11170000 33118 D 7119 17524215 3.12 34.19% 13128300 72.31 0.53% 13058800 24787 Average 60.92 Average 28755 Table 4.15 Results: Problem Size 3×1000×100

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 9376 6593784 3.41 2.87% 6460048 42.42 0.78% 6409820 3600b B 9424 8975440 3.52 4.61% 8670705 180.26 3.47% 8379600 3600b C 9368 6246798 3.52 23.92% 5066574 177.79 0.51% 5041000 3600b D 9456 16577643 3.44 43.59% 11609613 180.37 0.56% 11545000 3600b Average 145.21 Average 3600b Table 4.16 Results: Problem Size 4×1000×100

3600b indicates that the CPLEX run was cut off at the 3600-second time limit without solving the problem in reasonable time.

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 11625 3509733 3.81 2.96% 3424900 28.05 0.48% 3408700 3600b B 11820 5739144 3.94 7.32% 5428002 336.39 2.30% 5305841 3600b C 11860 14963040 3.95 23.60% 12207655 330.24 0.84% 12105760 3600b D 11895 4778960 3.90 35.62% 3561022 338.39 1.05% 3523890 3600b Average 268.61 Average 3600b Table 4.17 Results: Problem Size 5×1000×100


Multiple Machines with Unidentical/Identical Machines

This is a mixed identical/unidentical case. It is not surprising that the performance of both CPLEX and LBM lies between the two extreme cases, both in terms of speed and solution quality. CPLEX finds optimal solutions in reasonable time for all problem sizes up to 3×1000×100, but it also exhibits a sharp rate of increase in time as the problem size grows beyond 3×1000×100. LBM, on the other hand, is capable of producing very near-optimal solutions, generally less than 2.5-3% over optimal, in about 97 seconds for the largest (5×1000×100) problem, which CPLEX cannot handle. As before, the increase in time is at a slow, linear rate (see Figure 4.3).


Figure 4.3 Average CPU Time between LBM and CPLEX for Unidentical/Identical Processes


Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 147 171494 0.02 0.00% 171494 0.5 0.00% 171494 <1 B 147 373584 0.22 16.33% 325287 0.55 1.29% 321152 <1 C 147 885614 0.28 20.65% 736969 0.57 0.40% 734038 <1 D 153 2379064 0.23 58.31% 1502742 0.56 0.00% 1502742 <1 Average 0.54 Average <1 Table 4.18 Results: Problem Size 3×20×5

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 780 1095549 0.54 4.48% 1048544 0.84 0.00% 1048544 <1 B 771 1930126 0.45 12.71% 1712438 0.85 0.00% 1712438 <1 C 762 4229853 0.56 12.94% 3758187 0.94 0.34% 3745302 <1 D 762 5329814 0.49 18.45% 4499766 0.89 0.00% 4499766 <1 Average 0.88 Average <1 Table 4.19 Results: Problem Size 3×100×30

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 1032 1413995 0.56 5.32% 1357097 1.33 1.08% 1342590 4 B 1016 1709781 0.56 8.01% 1632049 1.53 3.10% 1582920 3 C 1004 3249097 0.5 17.42% 2793247 1.37 0.95% 2767080 2 D 1040 5902354 0.72 27.41% 4654952 1.45 0.48% 4632644 1 Average 1.42 Average 2.5 Table 4.20 Results: Problem Size 4×100×30


Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 1250 1106209 0.52 4.08% 1075033 1.28 1.14% 1062878 7 B 1290 1492625 0.59 6.55% 1416320 1.29 1.11% 1400820 9 C 1280 3766484 0.55 16.00% 3282368 1.33 1.09% 3246860 4 D 1280 6101987 0.53 26.53% 4889617 1.71 1.39% 4822710 5 Average 1.40 Average 6.25 Table 4.21 Results: Problem Size 5×100×30

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 7041 10379649 3.02 3.29% 10164258 15.27 1.15% 10048900 457 B 7068 13737089 2.16 5.99% 13278823 16.48 2.46% 12960200 945 C 6942 28445855 2.4 15.90% 24823823 33.75 1.15% 24542600 1297 D 7089 49042935 2.5 26.27% 39003354 31.76 0.42% 38839300 1812 Average 24.35 Average 1127.75 Table 4.22 Results: Problem Size 3×1000×100

Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 9448 10962186 3.07 3.94% 10731127 20.93 1.75% 10547000 3600b B 9312 12175889 2.94 4.98% 11936041 92.23 2.91% 11598000 3600b C 9448 24569861 2.92 16.40% 21392538 100.89 1.35% 21108000 3600b D 9552 44426396 2.89 24.79% 35766890 105.47 0.47% 35600000 3600b Average 79.88 Average 3600b Table 4.23 Results: Problem Size 4×1000×100


Num. GRD Time %Above LBM Time %Above Optimal Time* Test Var. (UB*) (s) Optimal (UB*) (s) Optimal Value (s) A 11660 10358119 3.29 2.94% 10208243 29.25 1.45% 10062000 3600b B 11785 12498957 3.37 5.02% 12419347 32.02 4.36% 11901000 3600b C 11775 27113482 3.19 16.46% 23978392 157.23 3.00% 23281000 3600b D 11735 44485050 3.25 22.84% 36613134 170.22 1.10% 36214000 3600b Average 97.18 Average 3600b Table 4.24 Results: Problem Size 5×1000×100

b 3600 indicates that the CPLEX run was cut off at the 3600-second time limit without solving the problem in reasonable time.

More Results: LBM vs. CPLEX for Identical Machines

To do further performance testing of LBM, we take the most difficult case--identical multiple machines--and run more tests against CPLEX. The aim here is to compare the solution quality produced by the two methods over the same "equivalent" time period. Two strategies are used. First, since for all test problems in this research with sizes up to 5×1000×100 LBM can find a near-optimal or optimal solution within 300 seconds, we first use LBM to solve the large test problems generated for this test and note the time LBM takes to reach its solution; call this the "LBM time". The duality gaps of these solutions are computed using an LP-relaxation solution as the lower bound. Then the same test problems are solved by CPLEX. For each test problem, CPLEX is terminated after the LBM time has elapsed, and whatever solution CPLEX has produced at the LBM time is used to compute a duality gap (using the same lower bound as above) and compared to the corresponding one produced by LBM. A second test is also performed. It is well known that CPLEX draws much of its speed in solving IPs from its state-of-the-art preprocessing libraries; in fact, it spends considerable time on this preprocessing before entering the actual LP/IP optimizer. We would like to know how well LBM competes against CPLEX's preprocessing intelligence on this problem, so we stop CPLEX right after the "preprocessing time" and compare the quality of the CPLEX solution obtained at that point through the duality gap as before. We generate test problems of all types--A, B, C, D--with sizes 3×1000×100, 4×1000×100 and 5×1000×100 for this test. The results are shown in Tables 4.25, 4.26, and 4.27. In all but three cases, the % duality gaps of CPLEX at the LBM time are slightly better than those of LBM, while in all cases CPLEX's % duality gaps at the "preprocessing time" are worse than LBM's. From this straight comparison, LBM is competitive with CPLEX in solving PCB1. If we take into account that the C++ platform on which CPLEX runs is at least twice as fast as the MATLAB platform [62], we may even postulate that LBM performs better than CPLEX in this type of test.

Test Type | No. Var. | LBM UB | LBM GAP % | LBM CPU (s) | CPLEX UB at LBM time | GAP % | UB at preprocessing time | CPU (s) | GAP %
A | 11625 | 3424900 | 3.8 | 28.05 | 3548444 | 7.09 | 3498700 | 120 | 5.68
B | 11820 | 5428002 | 6.0 | 336.39 | 5312484 | 3.63 | 5483490 | 120 | 6.64
C | 11860 | 12207655 | 4.2 | 330.24 | 12318000 | 4.67 | 12352000 | 110 | 5.00
D | 11895 | 3561022 | 2.8 | 338.39 | 3529896 | 1.83 | 3581215 | 120 | 3.24
Table 4.25 Results: Problem Size 5×1000×100

Test Type | No. Var. | LBM UB | LBM GAP % | LBM CPU (s) | CPLEX UB at LBM time | GAP % | UB at preprocessing time | CPU (s) | GAP %
A | 9376 | 6460048 | 3.3% | 42.42 | 6423610 | 2.60% | 6423610 | 40 | 2.60%
B | 9424 | 8670705 | 4.6% | 180.26 | 8472455 | 2.19% | 8792560 | 45 | 5.23%
C | 9368 | 5066574 | 2.9% | 177.79 | 5052348 | 2.48% | 5113586 | 35 | 3.65%
D | 9456 | 11609613 | 2.0% | 180.37 | 11673000 | 2.49% | 12042000 | 60 | 5.48%
Table 4.26 Results: Problem Size 4×1000×100

Test Type | No. Var. | LBM UB | LBM GAP % | LBM CPU (s) | CPLEX UB at LBM time | GAP % | UB at preprocessing time | CPU (s) | GAP %
A | 7053 | 6597954 | 2.8% | 34.2 | 6510664 | 1.43% | 6611968 | 10 | 2.94%
B | 7161 | 3863334 | 2.7% | 68.76 | 3825916 | 1.63% | 3877514 | 10 | 2.93%
C | 7086 | 11239736 | 1.5% | 68.39 | 11212000 | 1.10% | 11277000 | 10 | 1.67%
D | 7119 | 13128300 | 1.0% | 72.31 | 13095000 | 0.76% | 13149000 | 10 | 1.17%
Table 4.27 Results: Problem Size 3×1000×100

4.2.2 Sensitivity and Robustness Tests

We now test how well LBM works under a wide variety of problem conditions. Along the way we see whether any parameter or aspect of the problem affects the performance of the algorithm. Again, tests and results are classified into classes characterized by combinations of identical/unidentical multiple machines. The performance measure used is the percentage duality gap,

$\left( \dfrac{\text{upper bound} - \text{lower bound}}{\text{lower bound}} \right) \times 100\%.$

We test all cases with problem types A, B, C, and D. For each problem type, we have seven problem sizes (I×J×K): 3×20×5, 3×100×30, 4×100×30, 5×100×30, 3×1000×100, 4×1000×100, and 5×1000×100. Each problem size and type is replicated 5 times, and the minimum, average, and maximum % duality gaps are computed. In the % duality gap above, the lower bound is obtained by linear programming relaxation, and the upper bound is obtained from the algorithms being compared--namely the Greedy Board (GRD), the LBM heuristics + Greedy Board, and the LBM heuristics + Greedy Component. Both LBM heuristics + Greedy algorithms pick the best primal feasible solution they can find, and the better of the two is selected for use as the upper bound, and hence as the % duality gap representing the LBM algorithm. The results are illustrated in tables as well as graphically.
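A minimal sketch of how these replication statistics can be computed; the numbers in the usage line are illustrative, not taken from the results:

```python
def gap_stats(bounds):
    """Percent duality gaps for one problem type/size over its replications.

    bounds: list of (upper_bound, lp_lower_bound) pairs, one per replication.
    Returns (minimum, average, maximum) % duality gap.
    """
    gaps = [100.0 * (ub - lb) / lb for ub, lb in bounds]
    return min(gaps), sum(gaps) / len(gaps), max(gaps)

# five replications of one problem type/size (illustrative numbers only)
print(gap_stats([(103.0, 100.0), (99.8, 98.5), (205.1, 200.0),
                 (51.0, 50.4), (77.3, 76.0)]))
```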

Results and Analysis for Unidentical Multiple Machines

Results for Problem Type A

Problem LBM, % Duality gap GRD, % Duaity gap Size Min Avg Max Min Avg Max 3-20-5 0.0 3.0 5.8 1.7 7.2 10.0 3-100-30 0.1 2.4 7.9 6.9 8.9 9.5 4-100-30 0.0 2.2 4.3 8.4 9.5 10.0 5-100-30 1.1 3.2 4.9 17.8 20.8 26.3 3-1000-100 0.0 0.1 0.2 15.8 16.9 17.9 4-1000-100 0.4 1.1 1.9 14.6 15.9 17.5 5-1000-100 1.0 1.5 2.0 22.0 25.4 27.4 Table 4.28 Duality Gap between LBM and GRD for Problem Type A


Figure 4.4 Average Duality Gap between LBM and GRD for Problem Type A

Results for Problem Type B

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 0.0 3.2 10.1 8.6 11.3 12.5 3-100-30 0.2 1.7 3.7 7.1 11.1 13.9 4-100-30 0.3 2.4 5.4 11.2 11.6 12.4 5-100-30 1.4 3.6 6.6 20.7 24.0 27.8 3-1000-100 0.0 0.3 0.9 15.1 20.1 22.5 4-1000-100 0.1 1.5 4.0 20.5 22.4 23.8 5-1000-100 0.7 2.2 4.2 19.8 26.6 30.5 Table 4.29 Duality Gap between LBM and GRD for Problem Type B


Figure 4.5 Average Duality Gap between LBM and GRD for Problem Type B

Results for Problem Type C

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 0.0 6.1 14.5 21.0 28.2 33.7 3-100-30 0.0 0.8 2.4 19.1 25.3 33.4 4-100-30 0.1 0.7 1.6 21.8 25.6 30.2 5-100-30 0.8 1.6 2.3 25.7 39.2 52.9 3-1000-100 0.0 0.1 0.3 20.3 32.3 44.8 4-1000-100 0.0 0.3 0.8 32.3 35.0 41.7 5-1000-100 0.1 1.0 2.5 28.5 40.8 56.2 Table 4.30 Duality Gap between LBM and GRD for Problem Type C 104


Figure 4.6 Average Duality Gap between LBM and GRD for Problem Type C

Results for Problem Type D

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 0.0 0.7 2.0 24.3 36.7 51.7 3-100-30 0.0 0.7 2.2 19.7 37.6 66.2 4-100-30 0.0 0.5 2.2 22.6 31.4 48.4 5-100-30 0.0 0.3 0.4 31.8 36.0 40.0 3-1000-100 0.0 0.2 0.6 27.4 30.0 33.4 4-1000-100 0.1 0.9 3.7 31.8 40.3 52.4 5-1000-100 0.0 0.8 1.9 35.4 39.5 42.0 Table 4.31 Duality Gap between LBM and GRD for Problem Type D 105


Figure 4.7 Average Duality Gap between LBM and GRD for Problem Type D

Average Percent Duality Gap of Result Type A, B, C and D

Problem Size | Type A | Type B | Type C | Type D
3-20-5 | 3.0 | 3.2 | 6.1 | 0.7
3-100-30 | 2.4 | 1.7 | 0.8 | 0.7
4-100-30 | 2.2 | 2.4 | 0.7 | 0.5
5-100-30 | 3.2 | 3.6 | 1.6 | 0.3
3-1000-100 | 0.1 | 0.3 | 0.1 | 0.2
4-1000-100 | 1.1 | 1.5 | 0.3 | 0.9
5-1000-100 | 1.5 | 2.2 | 1.0 | 0.8
Average | 1.9 | 2.1 | 1.5 | 0.6
Table 4.32 Average Duality Gap for All Problem Types with Different Sized Problems


Figure 4.8 Comparing Average Duality Gap between Problem Types and Sized Problem

Conclusion:

For problem types B, C, and D, the minimum, average, and maximum duality gaps of LBM shown in this section were generated by the LBM heuristics + Greedy Component with problem space search; for problem type A, they were generated by the LBM heuristics + Greedy Board with problem space search, although both LBM variants give similar duality gaps for problem type A, and both give much better solutions than GRD. Tables 4.28-4.31 show that LBM often finds the optimal solution directly, since many of the corresponding minimum duality gaps are zero, whereas GRD has no zero duality gap. Moreover, LBM gives very low average and maximum duality gaps. From Figures 4.4-4.7, GRD consistently produces worse duality gaps as the manual machine becomes increasingly less efficient than the automatic machines; conversely, LBM produces increasingly better results (smaller and smaller duality gaps) as the manual machine becomes increasingly worse. This implies that when the manual insertion machine has very high processing and setup costs (times), the unidentical-machine problem is solved more efficiently. The average LBM results in Table 4.32 and Figure 4.8 indicate the following strengths of the LBM:

• Problem size is significant to the efficiency of the LBM algorithm for problem types A, B, and C (especially for medium-sized problems).

• Problem size is not significant to the efficiency of the LBM algorithm for problem type D (high-performance automatic insertion machines).

• The higher the performance of the automatic insertion machines, the more efficient the LBM is in solving PCB1.

Results and Analysis for Identical Multiple Machines

Results for Problem Type A

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 0.0 4.5 7.1 0.0 6.0 9.8 3-100-30 2.0 2.6 3.6 5.3 6.7 7.8 4-100-30 2.1 3.8 5.6 5.0 6.7 8.2 5-100-30 3.1 4.5 6.6 6.1 8.5 9.7 3-1000-100 2.8 3.1 3.5 4.9 5.8 6.7 4-1000-100 3.3 4.3 5.7 5.4 6.7 8.6 5-1000-100 3.8 5.0 6.1 6.4 7.3 9.3 Table 4.33 Duality Gap between LBM and GRD for Problem Type A 108


Figure 4.9 Average Duality Gap between LBM and GRD for Problem Type A

Results for Problem Type B

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 3.7 7.2 10.9 10.9 13.0 14.2 3-100-30 1.7 3.8 4.9 3.7 7.9 11.3 4-100-30 3.7 5.2 6.2 8.6 10.8 12.1 5-100-30 4.8 6.7 10.2 8.7 11.1 13.7 3-1000-100 2.7 3.0 3.2 9.7 10.7 11.9 4-1000-100 4.6 5.0 5.3 8.3 10.3 11.7 5-1000-100 6.0 6.2 6.5 9.9 11.0 12.1 Table 4.34 Duality Gap between LBM and GRD for Problem Type B


Figure 4.10 Average Duality Gap between LBM and GRD for Problem Type B

Results for Problem Type C

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 1.7 4.8 11.1 12.6 21.9 28.0 3-100-30 1.2 1.9 2.7 12.7 14.0 15.0 4-100-30 2.3 3.7 6.4 25.0 29.1 34.7 5-100-30 3.5 5.9 10.5 20.7 29.1 34.8 3-1000-100 1.5 1.8 2.4 23.9 27.1 28.8 4-1000-100 2.9 3.1 3.3 25.4 26.6 27.7 5-1000-100 4.2 4.6 5.0 24.9 28.9 34.3 Table 4.35 Duality Gap between LBM and GRD for Problem Type C 110


Figure 4.11 Average Duality Gap between LBM and GRD for Problem Type C

Results for Problem Type D

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 0.0 2.5 8.5 27.9 38.1 45.8 3-100-30 1.0 1.4 2.2 23.9 32.7 39.6 4-100-30 1.7 2.6 3.8 37.1 40.9 44.4 5-100-30 2.2 3.1 4.3 33.1 46.0 64.3 3-1000-100 1.0 1.1 1.5 34.9 39.0 43.5 4-1000-100 2.0 2.1 2.2 39.7 44.1 49.9 5-1000-100 2.8 2.9 3.1 36.7 43.2 52.1 Table 4.36 Duality Gap between LBM and GRD for Problem Type D 111


Figure 4.12 Average Duality Gap between LBM and GRD for Problem Type D

Average Percent Duality Gap of Result Type A, B, C and D

Problem Size | Type A | Type B | Type C | Type D
3-20-5 | 4.5 | 7.2 | 4.8 | 2.5
3-100-30 | 2.6 | 3.8 | 1.9 | 1.4
4-100-30 | 3.8 | 5.2 | 3.7 | 2.6
5-100-30 | 4.5 | 6.7 | 5.9 | 3.1
3-1000-100 | 3.1 | 3.0 | 1.8 | 1.1
4-1000-100 | 4.3 | 5.0 | 3.1 | 2.1
5-1000-100 | 5.0 | 6.2 | 4.6 | 2.9
Average | 4.0 | 5.3 | 3.7 | 2.3
Table 4.37 Average Duality Gap for All Problem Types with Different Sized Problems


Figure 4.13 Comparing Average Duality Gap between Problem Types and Sized Problem

Conclusion:

For problem type A, the LBM heuristics + Greedy Board with problem space search yields the best minimum, average, and maximum duality gaps; for problem types B, C, and D, they are obtained by the LBM heuristics + Greedy Component with problem space search. Again, both LBM variants produce better solutions than GRD. Tables 4.33 and 4.36 show that LBM produces optimal solutions in some instances, since their minimum duality gaps are zero, whereas GRD has no zero duality gap. The other tables also show solutions close to optimal, because the corresponding duality gaps are very small; moreover, LBM also gives very low average and maximum duality gaps. From Figures 4.9-4.12, GRD consistently produces worse duality gaps as the manual machine becomes increasingly less efficient than the automatic machines; conversely, LBM produces increasingly better results (smaller and smaller duality gaps) as the manual machine becomes increasingly worse. This implies that when the manual insertion machine has very high processing and setup costs (times), the identical-machine problem is solved more efficiently. The average LBM results in Table 4.37 and Figure 4.13 indicate the following strengths of the LBM:

• Problem size is not very significant to the efficiency of the LBM algorithm for problem types A, B, C and D.

• The higher the performance of the automatic insertion machines, the more efficient the LBM is in solving PCB1.

Results and Analysis for Unidentical/Identical Multiple Machines

Results for Problem Type A

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 5.4 6.3 8.1 5.4 8.1 14.2 3-100-30 0.0 1.0 1.7 4.0 5.0 7.8 4-100-30 2.4 4.0 5.2 4.9 6.7 10.1 5-100-30 1.7 6.4 9.3 4.6 8.3 10.9 3-1000-100 1.7 2.1 2.3 2.6 3.7 4.9 4-1000-100 3.1 3.4 3.8 3.9 5.0 5.7 5-1000-100 2.8 3.4 4.3 4.5 4.8 5.3 Table 4.38 Duality Gap between LBM and GRD for Problem Type A


Figure 4.14 Average Duality Gap between LBM and GRD for Problem Type A

Results for Problem Type B

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 3.1 8.9 13.1 12.0 14.5 18.4 3-100-30 0.0 3.2 5.8 5.8 8.5 12.7 4-100-30 3.9 5.7 7.9 7.1 8.4 9.9 5-100-30 1.6 7.2 12.4 7.1 10.8 14.4 3-1000-100 3.1 4.0 4.9 6.7 7.4 8.9 4-1000-100 5.0 6.5 7.7 6.4 8.5 9.5 5-1000-100 7.2 7.9 8.1 7.8 8.7 9.8 Table 4.39 Duality Gap between LBM and GRD for Problem Type B


Figure 4.15 Average Duality Gap between LBM and GRD for Problem Type B

Results for Problem Type C

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 2.1 5.1 10.1 9.9 28.5 37.1 3-100-30 0.4 0.8 1.2 13.0 16.3 18.2 4-100-30 1.2 2.1 2.8 11.2 16.4 21.2 5-100-30 1.9 5.1 7.7 17.0 22.9 28.4 3-1000-100 1.7 2.0 2.6 16.5 18.5 19.8 4-1000-100 3.0 3.6 4.0 17.0 20.7 25.1 5-1000-100 4.4 4.9 5.4 18.0 21.6 23.6 Table 4.40 Duality Gap between LBM and GRD for Problem Type C


Figure 4.16 Average Duality Gap between LBM and GRD for Problem Type C

Results for Problem Type D

Problem LBM GRD Size Minimum Average Maximum Minimum Average Maximum 3-20-5 0.1 3.7 8.6 22.2 39.4 58.4 3-100-30 0.0 1.6 3.0 12.9 15.6 18.4 4-100-30 0.8 1.3 1.8 25.4 27.5 29.8 5-100-30 1.8 2.5 3.9 26.4 30.5 33.8 3-1000-100 0.7 1.0 1.2 21.1 28.0 37.2 4-1000-100 1.5 1.8 2.1 24.0 26.7 29.4 5-1000-100 2.0 2.9 3.3 23.9 26.6 31.4 Table 4.41 Duality Gap between LBM and GRD for Problem Type D


Figure 4.17 Average Duality Gap between LBM and GRD for Problem Type D

Average Percent Duality Gap of Result Type A, B, C and D

Problem Size | Type A | Type B | Type C | Type D
3-20-5 | 8.1 | 8.9 | 5.1 | 3.7
3-100-30 | 5.0 | 3.2 | 0.8 | 1.6
4-100-30 | 6.7 | 5.7 | 2.1 | 1.3
5-100-30 | 8.3 | 7.2 | 5.1 | 2.5
3-1000-100 | 3.7 | 4.0 | 2.0 | 1.0
4-1000-100 | 5.0 | 6.5 | 3.6 | 1.8
5-1000-100 | 4.8 | 7.9 | 4.9 | 2.9
Average | 5.94 | 6.21 | 3.37 | 2.11
Table 4.42 Average Duality Gap for All Problem Types with Different Sized Problems


Figure 4.18 Comparing Average Duality Gap between Problem Types and Sized Problem

Conclusion

For problem type A, the LBM heuristics + Greedy Board with problem space search yields the best minimum, average, and maximum duality gaps; for problem types B, C, and D, they are obtained by the LBM heuristics + Greedy Component with problem space search. Again, both LBM variants produce better solutions than GRD. Tables 4.38, 4.39, and 4.41 show that LBM produces optimal solutions in some instances, since their minimum duality gaps are zero, whereas GRD has no zero duality gap. Moreover, LBM also gives very low average and maximum duality gaps. From Figures 4.14-4.17, GRD consistently produces worse duality gaps as the manual machine becomes increasingly less efficient than the automatic machines; conversely, LBM produces increasingly better results (smaller and smaller duality gaps) as the manual machine becomes increasingly worse. This implies that when the manual insertion machine has very high processing and setup costs (times), the identical-machine problem is solved more efficiently. The average LBM results in Table 4.42 and Figure 4.18 indicate the following strengths of the LBM:

• Problem size is not very significant to the efficiency of the LBM algorithm for problem types A, B, C and D.

• The higher the performance of the automatic insertion machines, the more efficient the LBM is in solving PCB1.


5 Actual Case Study and Results

In the previous chapter, we generated a large number of test problems with a wide variety of characteristics and properties. We used those generated test problems to perform (i) a performance test--to see how well the proposed LBM algorithm stacks up, in terms of speed and solution quality, against the best general-purpose IP solver and against the only known algorithms for the operation assignment PCB assembly problem with multiple machines and no board splitting; (ii) a sensitivity test--to see which problem parameters, if any, affect the performance of LBM; and (iii) a robustness test--to see how LBM performs under a wide variety of problem conditions. One important thing remains: to see whether LBM can perform in real, practical situations. Fortunately, we have received full collaboration from a PCB manufacturer in Thailand--C.Y.Tech Co., Ltd--who has kindly provided information so that we can perform a real and practical performance test of LBM. To protect the integrity of C.Y. Tech, any proprietary and trade-sensitive information is avoided or altered. Nevertheless, the data used to formulate the test case--namely average processing time, average setup time, number of component types, number of board types, the component-board pair ratio, demand, and machine capacity--make the test case as close to reality as possible. We truly thank C.Y. Tech for their kind assistance and full cooperation.

5.1 Introduction to C.Y.Tech Co., Ltd

We received authorization to study the printed circuit board assembly problem at C.Y. Tech Co., Ltd., located in Pathumtanee, Thailand. The company is a full-turnkey Electronic Manufacturing Service company offering Printed Circuit Board Assembly and Full Box-Build production. The company enjoys Board of Investment (BOI) promotional privileges and is ISO/TS 16949 certified. The company offers a wide range of services, consisting of

• Auto Insertion Technology

• Surface Mount Technology

• In-Circuit Test

• Functional Circuit Test

• Wave Solder

• Solder Touch-up

• Box Build

• Fixture and Pallet Design as needed

• Outsource Operation as needed: PCB, Metal box, , etc.

This case study is associated only with the auto insertion technology, so the other services are not studied in this research. All board types are designed and ordered by customers, and the details are then reviewed by the company. All materials required are known in the form of a bill of materials (BOM); the company then checks stock and purchases materials according to the business flow chart shown in Figure 5.1 below.

The specific flow chart for the insertion process is illustrated in Figure 5.2. Even though there are many related processes in the production of printed circuit boards, we focus in particular on the operation assignment of automatically inserted printed circuit products, which is associated with processes (4), (5) and (20) in the process flow chart. We do not consider repaired parts between insertion processes; they are assumed to be included as part of the demand, and the repair proportion is assumed known in advance.

Figure 5.1 Business Flow Chart of C.Y.Tech Co., Ltd. [63]

Figure 5.2 Process Flow Chart of Auto Insertion Technology

5.2 Prepared Data and Process Information

Demand Data and Product Description

The following is a sample order and product description from a customer, focusing exclusively on auto insertion technology products. There are 80 board types and 1,410 component types in the combined BOM. An example layout and BOM of a printed circuit board model are shown in Figure 5.3 and Tables 5.1 and 5.2. After careful analysis, the number of board types actually required by the order is only 56. Among the component types, a number of special components need to be assigned by other processes; we focus only on component types that can be inserted by both the automatic and the manual insertion processes. Therefore, we have 723 component types and 56 board types as inputs to the model. A component type may be used on more than one board type, and the commonality ratio--the component-board pair ratio--is 2.38. Consequently, we have a model with 10,004 decision variables.

Figure 5.3 Layout of Printed Circuit Board Model RVD-164

Item | Description | Package | Quantity
1 | 1N4148 75V/200mA | diode_l5_b2_DIA0.8_3m | 64
2 | 78L05 Voltage REG. | | 16
3 | LED HOLDER, spec: LED HOLDER, ROUND 2 PIN 3MM; Colour: White; Diameter, External: 4.2mm; LED / lamp size: 3mm; Length / Height, external: 13mm | | 1
4 | Diode bridge B250-C1500 | | 1
5 | BNC connector LIGE, FOR PCB | | 80
6 | ELEKTROLYT 1uF/50V, RADIAL | D4_F1.5_DIA0.8 | 32
7 | Electrolytic 2200uF/25V | | 2
8 | DIP- 2 POL | l10_b7_dia0.8_dil04_sw | 16
9 | RF Chokes, HF 10 uH | | 2
10 | EMC T-FILTER 10nF/100V | | 4
11 | Connector 1x3 POL HAN LIGE, HEA, spec: HEADER, 1 ROW VERT 3WAY | | 1
12 | Connector 1x6 POL HAN LIGE, HEA, spec: HEADER, 1 ROW VERT 3WAY | | 1
13 | Connector 2x7 POL HUN LIGE, HEA, spec: 2.54mm (.100") Pitch C-Grid III™ PC Board Connector, Dual Row, Vertical, 14 Circuits | | 4
14 | Capacitor FILM 1uF/63V POLY. | | 16

Table 5.1 Bill of Materials of Printed Circuit Board Model RVD-164

Item | Description | Package | Quantity
15 | Fixed Carbon 75R 1/4W, KULSTOFMODSTAND CB 75R, spec: carbon film | RES_L7_B2.5_DIA0.8_4m | 80
16 | Capacitor ceramic 10nF 2M | cap_l8_b2.5_dia0.8_2m | 64
17 | Capacitor ceramic 68pF NPO 2M | | 16
18 | Capacitor ceramic 39pF NPO 2M | | 16
19 | LM6181 VIDEO OP-AMP | | 16
20 | LM79L05 Voltage Reg. National | | 16
21 | Led yellow GUL ?3, DIFUS | led_d5_f2 | 1
22 | Capacitor Polyester 100nF 5% 63V | CAP_L8_B2.5_DIA0.8_2M | 32
23 | PTC Resistor, Ih 750mA | PTC_L10.4_B3.1_DIA0.7_2M | 2
24 | PTC Resistor, Ih 900mA | PTC_L7.4_B3.0_DIA0.7_2M | 2
25 | Resistor SFR 16T 120R | RES_L5_B2_DIA0.8_3M | 16
26 | Resistor SFR 16T 1K | res_l5_b2_dia0.8_3m | 17
27 | Resistor SFR 16T 220R | res_l5_b2_dia0.8_3m | 32
28 | MODSTAND SFR 16T 1K1 | | 16
29 | Resistor SFR 16T 47K | RES_L5_B2_DIA0.8_3M | 16
30 | Part number label model RVD-164 | | 1
31 | RVD-164, R?PRINT V:0.0 (LVD) | | 1

Table 5.2 (Continued) Bill of Materials of Printed Circuit Board Model RVD-164

The actual and forecasted demands for the 56 board types were provided by production planning: one actual demand for October and two forecasted demands, for November and December.

Process Information

There are three automatic insertion machines and one manual insertion process (comprising 20 identical processes). Two of the automatic insertion machines are identical Axial 6292B inserters, shown in Figure 5.4; the other is a Radial VC-5B, shown in Figure 5.5. Both Axial inserters require their components to be sequenced in the feeder by a sequencer machine, whereas the Radial machine does not require component sequencing. There are two types of sequencer machine: 1) sequencer R2596 B with 80 stations and 2) sequencer R2596 K with 60 stations. These are shown in Figure 5.6.

Figure 5.4 Axial Inserter 6292 VCD-DH6 Dual Head by Universal

Figure 5.5 Radial Inserter VC-5B by TDK

Figure 5.6 Axial Sequencer Machine by Universal

The Radial machine's capacity is limited to 100 component types. An Axial machine has a dual head and feeder, with the insertion sequence provided by an Axial sequencer; therefore, the component type capacity of an Axial machine depends on the capacity of its Axial sequencer. The sequencers used have different component type capacities, and one sequencer machine is assigned to exactly one Axial insertion machine. The capacities of sequencer R2569B and sequencer R2569K are 80 and 60 component types, respectively.

Processing and Setup Time Data

Processing and setup times are very important parameters as inputs to the model because they define the total production time in the objective function. The maximum speeds of the automatic insertion machines, as given in their manuals, are 0.022 second per insertion for the Axial and 0.045 second for the Radial. However, the processing times achieved in practice are not quite the same as those numbers. We therefore estimated practical processing and setup times for both the automatic and manual insertion processes using work study and time-and-motion analysis. Both Axial inserters are identical, and they are the fastest automatic insertion machines, with a dual-head inserter for each individual feeder; however, they also require their components to be sequenced by a sequencer machine. The Radial inserter, with a single-head inserter, is slower but does not require a sequencer machine.

The final process is the manual insertion process. Although it actually comprises 20 identical processes, we lump them into one manual insertion process. The average processing and setup time of the manual insertion process is the average of the total processing and setup times of the 20 processes; it is estimated to be roughly an order of magnitude larger than those of the automatic insertion processes. The average processing and setup times of each insertion process used in this demonstration are summarized in Table 5.3 below:

Machine | Processing Time (0.1 second) | Setup or Board Transfer Time (0.1 second)
Axial 6292B (sequencer R2569B) | 3 | 30
Axial 6292B (sequencer R2569K) | 3 | 30
Radial VC-5B | 5 | 50
Manual Insertion Process | 25 | 250

Table 5.3 Average Processing and Setup Times of Insertion Processes
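As a small, purely hypothetical illustration of these 0.1-second time units (our own example, not company data), consider inserting the 64 diodes of model RVD-164 on one Axial machine:

```python
# Hypothetical illustration of the 0.1-second units of Table 5.3, using the
# 64 diodes of board RVD-164 on an Axial inserter (our example, not company data).
diodes, unit = 64, 0.1                  # insertions and seconds per time unit
processing = diodes * 3 * unit          # 3 units per Axial insertion
transfer = 30 * unit                    # 30 units per board setup/transfer
print(round(processing + transfer, 1))  # 22.2 seconds for this board's diodes
```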

5.3 Existing Planning Method

We briefly describe the present planning method used by the C.Y. Tech Company. The company performs medium- to short-term planning to assign all board and component types to the insertion processes without considering line balancing. The operation assignment plan is determined by a simple heuristic: the components with the largest insertion volumes are assigned to the fastest process. The fastest insertion machine is the Axial 6292B with sequencer R2569B, the second fastest is the Axial 6292B with sequencer R2569K, the third fastest is the Radial VC-5B, and the slowest insertion process is the manual insertion process. We can summarize the current algorithm as follows:

Step 1: Initialization

• Prepare the number of each component type j used in one board of type k (η_jk) from the combined BOM.

• Prepare the total demand for each board type over the time period (d_k).

• Set i = 1 (Axial 6292B with sequencer R2569B), i = 2 (Axial 6292B with sequencer R2569K), i = 3 (Radial VC-5B), and i = I = 4 (the manual insertion process).

• Prepare the total number of component types that can be assigned to each insertion process (N_i).

Step 2: Component Assignment

• For i = 1 to I:
    set J*_i = ∅
    while |J*_i| < N_i do
        j* = argmax_{j ∉ J*_1 ∪ … ∪ J*_i} Σ_{k∈K} η_jk d_k
        set J*_i = J*_i ∪ {j*}
    end

• Assign all component types j ∈ J*_i to insertion process i.

Step 3: Board Assignment

• Assign every board type k that requires a component type j ∈ J*_i to insertion process i.

• Stop.
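The following is a minimal Python sketch of this company heuristic, written for illustration only: the matrix eta, the demand vector d, and the capacity list N are hypothetical inputs, and we assume, as Step 2 suggests, that a component already assigned to a faster process is excluded from later processes and that the capacities cover all component types.

```python
import numpy as np

def existing_heuristic(eta, d, N):
    """Assign components to processes by the company's volume-based rule.

    eta : (J, K) array, eta[j, k] = number of component j used on board k
    d   : (K,) array, total demand of each board type over the horizon
    N   : list of component-type capacities, processes ordered fastest first
    Returns (comp_assign, board_assign): index sets per process.
    """
    J, K = eta.shape
    volume = eta @ d                 # total insertion volume of each component type
    unassigned = set(range(J))
    comp_assign = []
    for cap in N:                    # Step 2: fill the fastest process first
        chosen = sorted(unassigned, key=lambda j: volume[j], reverse=True)[:cap]
        comp_assign.append(set(chosen))
        unassigned -= set(chosen)

    # Step 3: a board visits every process that holds one of its components
    board_assign = [
        {k for k in range(K) if any(eta[j, k] > 0 for j in comps)}
        for comps in comp_assign
    ]
    return comp_assign, board_assign

# Tiny hypothetical example: 4 component types, 2 board types, 2 processes
eta = np.array([[2, 0], [1, 3], [0, 1], [4, 4]])
d = np.array([100, 50])
print(existing_heuristic(eta, d, N=[2, 2]))
```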

5.4 Results and Performance

We solved three cases, one with the actual demand and two with forecasted demands. We did so using LBM, CPLEX 8.0, GRD, and the existing method. The results are compared in Table 5.4 and Figure 5.7. The CPU time (in seconds) of each algorithm in each demand period is given in Table 5.5; the CPU time for the "optimal" solution was measured in CPLEX, and the others are MATLAB times. The percent above optimal is calculated as ((UB solution - optimal solution) / optimal solution) × 100, and the results are given in Table 5.6.

Solution for Each Demand Period

Algorithm | October, 07 | November, 07 | December, 07
LBM | 10,954,307 | 21,273,532 | 23,549,070
OPTIMUM (CPLEX) | 10,789,300 | 21,238,200 | 23,468,700
GRD | 22,545,889 | 43,936,430 | 50,061,386
EXISTING | 12,885,788 | 24,799,030 | 27,031,622

Table 5.4 Results of All Algorithms in Each Demand Period


Figure 5.7 Time Comparison of the four algorithms

CPU Time (seconds) in Each Demand Period

Algorithm | October, 07 | November, 07 | December, 07
LBM | 199 (b) | 161 (b) | 193 (b)
OPTIMUM (CPLEX) | 741 (a) | 258 (a) | 289 (a)
GRD | 1.4 (b) | 1.5 (b) | 1.4 (b)
EXISTING | <1 (b) | <1 (b) | <1 (b)

(a) CPU time measured in CPLEX; (b) CPU time measured in MATLAB

Table 5.5 CPU Times of all Four Algorithms

Percent Above Optimal Solution

Algorithm | October, 07 | November, 07 | December, 07
LBM | 1.53% | 0.17% | 0.34%
GRD | 108.97% | 106.87% | 113.31%
EXISTING | 19.43% | 16.77% | 15.18%

Table 5.6 Percent above Optimal for Three Algorithms


Solution and Percent Saving

Algorithm | October, 07 | November, 07 | December, 07
LBM | 10,954,307 | 21,273,532 | 23,549,070
EXISTING | 12,885,788 | 24,799,030 | 27,031,622
Percent Saving | 17.63% | 16.57% | 14.79%

Table 5.7 Percent Saving of Production Time between LBM and Existing Method


Figure 5.8 Total Production Time: LBM vs. Existing Method

As can be seen, CPLEX yielded the true optimum but took the longest to do so. LBM achieved very near-optimal solutions, with percent above optimal ranging from about 0.2% to 1.5%, and it did so using roughly one quarter to two thirds of the time CPLEX took to find the true optimum. GRD took very little time, but its solution quality is very poor, with production times consistently about twice those of the other methods. Surprisingly, the current heuristic used by the company performs quite well: it is the fastest, even faster than GRD, and its solution quality is acceptable, with percent above optimal ranging from 15% to 19%.

5.5 Final Comments

Can LBM make an impact on the production planning process of C.Y. Tech? Comparing the performance of LBM with the existing planning method used by C.Y. Tech, we find that LBM can save the total production time by about 15% to 17% (see Table 5.7). Is this significant? Perhaps not right now: C.Y. Tech has enough capacity to handle current demand and any slight increase in future demand. But if demand keeps growing in the very competitive markets of Southeast and East Asia, C.Y. Tech will probably have to increase its capacity. By using LBM, C.Y. Tech could do so with minimal capital investment.

The fact that LBM continues to provide very near-optimal solutions within very reasonable computational times in a real practical situation is very encouraging. The performance of LBM does not seem to be affected by changing demand conditions (e.g., the forecasted demands for November and December). This is a good indication that the LBM algorithm can be used as an effective and efficient planner for the printed circuit board assembly problem.

The distinctive characteristics of this case study are that the commonality ratio is high, the speed difference between the automatic processes and the manual process is large, and the demand is on the high side. These are worst-case conditions for greedy-based algorithms such as GRD, and the results of this test study confirm it. In fact, we have seen that a greedy-based algorithm alone consistently provides inferior solutions, even though it is very fast. It should not be used by itself, particularly in the very competitive PCB industry.

General-purpose optimizers such as CPLEX are an alternative. But precisely because they are general purpose, only a very sophisticated and powerful one such as CPLEX can handle large industrial-sized problems satisfactorily. Even CPLEX will eventually run into runtime problems, and it usually does not take a large increase in problem size for that to happen; the results in this case study (along with those in the previous chapter) offer a glimpse of evidence for this. Moreover, because such software is rather sophisticated, it is generally expensive to acquire, install, and maintain, and it is somewhat more complicated to set up the model, prepare the input data, and use. Without expertise in optimization modeling and computer programming, it is probably unwise to try it as an alternative.


6 Conclusions and Future Work

6.1 Conclusions

The Problem Addressed:

In this research we addressed the most general case of the operation assignment problem in printed circuit board assembly by considering multiple automatic/semiautomatic machines and allowing board splitting. The resulting binary integer programming model has a structure that is a combination of generalized assignment and (discrete) facility location. Some algorithms exist for simplified cases, such as those with a single automatic machine and/or no board splitting, but none is available for the general case considered in this research except some greedy-based heuristics. Because the general model is more representative of what is required in today's competitive PCB industry, and because of the often unsatisfactory performance of the greedy-based methods, we aimed to develop a more efficient solution method suitable for real-world application.

The Solution Strategy Used:

Our strategy was to exploit the unique structure of the model to the fullest extent possible. To do so, we found that decomposition was the best first line of attack, because it helped isolate the various structures embedded in the problem, making the resulting subproblems very easy to solve. But to be able to disaggregate the model, the weakly coupling constraints, in the form of capacity constraints and weakly linked location-type constraints, had to be relaxed. We did so using standard duality-based Lagrangian relaxation. The reasons behind this were:

a) The resulting subproblems could be solved simply by inspection;

b) Because of the weakly coupling nature of most of the relaxed constraints, it was expected that the subproblem solutions, already in binary form, at or near the dual optimum would be very close to the primal optimum itself;

c) Because of (b), finding the primal optimal or near-optimal solution, starting from the binary subproblem solutions at or near the dual optimum, should require no more than a neighborhood search; and by exploiting the weakly coupling nature of the relaxed constraints, simple but effective heuristics should be readily available for such a neighborhood search (more on this later); and

d) Also because of (b), we should not have to solve the Lagrangian dual to full optimality, and any procedure that provided a decent approximation of the (optimal) multipliers would suffice. If all these steps were done well, we should not have to rely on branch-and-bound or any iterative process to reach the desired solution.

Because of the reasoning in (d), we used the optimal dual solution of the LP relaxation of the original problem (which means one LP of the original problem size has to be solved once) as the multipliers forming the Lagrangian relaxation at or near the dual optimum. Alternatively, we could execute a few iterations of the subgradient method on the Lagrangian dual to find the required multipliers. Which method to use depends on the size of the problem: for small to relatively large problems we used the former, and for very large problems we recommend the latter. Extensive tests performed in this study have shown that this strategy, particularly the LP-based multipliers, worked quite well here.
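For readers unfamiliar with the second option, the following is a minimal sketch of a standard subgradient multiplier update for a relaxation of the form max over λ ≥ 0 of min over x of c'x + λ'(Ax - b); the function names, step-size rule, and stopping rule are illustrative assumptions, not the exact procedure implemented in this research.

```python
import numpy as np

def subgradient_multipliers(solve_lagrangian, A, b, ub, iters=30, theta=1.0):
    """Approximate Lagrangian multipliers by a few subgradient steps.

    solve_lagrangian(lam) must return (x, lower_bound), where x minimizes the
    relaxed problem at multipliers lam (solvable by inspection in our setting).
    ub is any known upper bound on the primal optimum (e.g. a greedy solution).
    """
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        x, lower_bound = solve_lagrangian(lam)
        g = A @ x - b                       # subgradient = constraint violation
        if np.allclose(g, 0):
            break                           # relaxed solution already feasible
        step = theta * (ub - lower_bound) / (g @ g)   # Polyak-type step size
        lam = np.maximum(0.0, lam + step * g)         # keep multipliers >= 0
        theta *= 0.95                       # slowly shrink the step parameter
    return lam
```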

In the final step, involving a neighborhood search, the goal is obvious: to reach primal feasibility from a nearby infeasible point. If the neighborhood is small but contains the primal optimum, then any primal feasible point reached should be near-optimal, if not the optimum itself. To develop effective heuristics to perform the neighborhood search, we made the following observations about the nature of the relaxed coupling constraints:

i) The majority of the constraints are pairwise constraints, pairing a variable in one small subgroup with a variable in another small subgroup. This means a "variable fixing" strategy should work well to un-violate the violated constraints in their corresponding group (the goal of the neighborhood search that we are seeking). Fixing or switching one variable should have little to no effect on variables in other, directly unrelated groups.

ii) The rest of the relaxed constraints are capacity constraints. The number of these constraints is small compared to the rest (typically no more than 10 in real applications, since the number of automatic/semiautomatic machines in a PCB company is usually no more than 10). Within this constraint group, the constraints are unrelated to one another except for some weak links through other sets of variables that do not appear in these constraints. The variables in these constraints do not appear anywhere else in the model; each variable appears only once in this group of constraints, and its (nonzero) coefficient is always 1. Thus, to un-violate a violated capacity constraint, some form of greedy-based switching heuristic should do well, again because the effect of any variable switch is expected to be tightly localized.

Based on observation (i), we developed variable fixing/switching heuristics, called Lower Bound Maintaining (LBM) algorithms, to help repair the infeasibility of violated constraints in the location-type group. Again, the tests performed in this study show that these heuristics accomplished the job they were designed to do in all cases tested.

Based on observation (ii), we customized greedy-type heuristics to help repair the infeasibility of violated capacity constraints. In particular, we customized the well-known Greedy Board and Greedy Component heuristics for use whenever a capacity constraint is violated. Again, our extensive tests have indicated that these two heuristics, used in tandem, did the job they were designed to do very well.
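To make the switching idea concrete, here is a small, purely illustrative sketch of a greedy repair move for a violated component-type capacity constraint; it is not the Greedy Board or Greedy Component heuristic of Chapter 3, and the move_cost function used to rank candidate moves is an assumption introduced only for illustration.

```python
def repair_capacity(assign, capacity, move_cost):
    """Greedily repair over-capacity machines by moving component types.

    assign    : dict machine -> set of component types currently assigned
    capacity  : dict machine -> maximum number of component types allowed
    move_cost : function (component, src, dst) -> added production time of the move
    The cheapest move is applied repeatedly until every machine fits.
    """
    machines = list(assign)
    for src in machines:
        while len(assign[src]) > capacity[src]:          # violated capacity constraint
            candidates = [
                (move_cost(j, src, dst), j, dst)
                for j in assign[src]
                for dst in machines
                if dst != src and len(assign[dst]) < capacity[dst]
            ]
            if not candidates:
                raise RuntimeError("no feasible move; a larger neighborhood is needed")
            _, j, dst = min(candidates)                   # cheapest switch, localized effect
            assign[src].remove(j)
            assign[dst].add(j)
    return assign
```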

The LBM algorithm used in combination with the Greedy Board/Component heuristics proved quite successful in our test study. Of the roughly one thousand test problems run, the solutions found by the proposed method were optimal or near optimal, and the majority of these were achieved without using anything in the final neighborhood search step other than the above heuristics. However, there were a few cases in which the method failed to find a primal feasible solution. This was not a failure of the heuristics; rather, the "neighborhood" within which we searched apparently did not contain any primal feasible solution. In these cases we had to introduce a procedure that would "perturb" and "expand" the domain of the neighborhood. Here we employed a recently developed strategy called Problem Space Search (PSS), which essentially perturbs and expands the domain of the search space. It is a moderately expensive procedure and should be used only when needed. After incorporating PSS, all test problems (including an industrial-sized real-world case) were solved successfully, yielding optimal or near-optimal solutions. For the few problems in which PSS had to be activated, it was needed no more than twice on any problem. This is good news for two reasons. First, it confirms our hypothesis that the decomposition-based method would terminate (before the final neighborhood search) at a point not too far from the primal optimum that we seek. Second, the less PSS is used, the less costly it is to solve the problem.

Testing the Claims:

We have described all the pieces, as well as a framework to put them together, to form what we believe to be an efficient algorithm for solving industrial-sized operation assignment problems in PCB assembly with multiple machines and board splitting. We tested the performance of the proposed algorithm as extensively as we could, using carefully designed and generated test problems and a real-world case study.

We generated almost a thousand test problems, which can be classified as follows:

• 3 types of automatic insertion processes: identical, unidentical, and mixed identical/unidentical insertion processes and costs

• 5 generation types: performance ratios of the manual to the automatic insertion process of 1.5, 2, 5, and 10

• 7 problem sizes: up to 5 processes, up to 1,000 component types, and up to 100 board types

Moreover, one real case study was used. The case, graciously provided by C.Y. Tech Co., Ltd., had 4 processes, 723 component types, and 56 board types, over three demand periods.

The tests performed were:

Performance Test: to see how well the proposed LBM algorithm performed compared with CPLEX (arguably the best LP/IP optimizer today); the Greedy Board heuristic (GRD, one of the two greedy-based heuristics that are currently the only known existing algorithms for the problem addressed in this research); and, in the real case study, the "top component, fastest machine" heuristic used by C.Y. Tech.

Robustness Test: to see how well LBM performed under a wide variety of problem conditions.

Sensitivity Test: to see whether LBM performance would be affected by any problem parameters or aspects.

Here is a summary of what we can conclude from these tests:

1. Overall, the results indicate that the proposed LBM algorithm can provide the best compromise solution for the PCB assembly industry. For all of the (almost) five hundred test problems and the real case study, LBM produced optimal or near-optimal solutions, with a percent above optimal or duality gap no larger than 3.7% and a computational time within 200 seconds. In fact, in most cases the percent above optimal/duality gap was within 2% and the computational times were within 50 seconds.

2. Looking at the results in more detail, the LBM heuristics + Greedy Component combination performed best when all processes were identical, and the LBM heuristics + Greedy Board combination did best if some or all processes were unidentical. So the best strategy is to use both in tandem, namely run both and choose the better result; this is what was used in our test study. For the case with unidentical machines, the best LBM result was produced when the automatic machines were much faster than the manual process (problem type D), with the duality gap ranging from 0.0% (i.e., optimal) to 1.9% and averaging 0.6%. For the case with identical machines, the best LBM result was again produced when the automatic machines were much faster than the manual process (problem type D), with the duality gap ranging from 1.5% to 3.7% and averaging 2.3%. For the case with mixed identical/unidentical processes, the best LBM result was also produced when the automatic machines were much faster than the manual process (problem type D), with the duality gap ranging from 0.9% to 3.4% and averaging 2.1%. We note that, in general, the most difficult case to handle is when all processes are identical. This is true for all methods used in the test. CPLEX, which often found the true optimum for small to medium-sized problems (up to 3×1000×100), did so at a markedly lower speed; for larger problems, CPLEX could not find any solution within the 3,600-second time limit for any test problem in this category. GRD performed poorly in all multiple-machine tests, with the worst performance in the "identical machines" category.

3. The computational time for LBM on the test problems was within 200 seconds. It was shown clearly that LBM time increases linearly with problem size, but the rate of increase is small. With less than 200 seconds needed to solve a problem of size 5×1000×100, there is every indication that LBM can handle very large industrial-sized problems comfortably. Moreover, it should be noted that about 90% of the computational time in LBM is spent in a single solve of the LP relaxation (to find approximately optimal multipliers). This was done in MATLAB using LINPROG; if it were done using a faster system such as CPLEX, LBM time would likely go down about 10-fold.

4. As mentioned above, for large-scale problems LBM is probably the only choice that can produce good solutions in an acceptable time: CPLEX will not work in that range of problems, and GRD will produce too poor a solution. For small to medium-sized problems (up to 3×1000×100), CPLEX provided the optimal solution at a much faster speed than LBM (except at the upper end of this range, where LBM is faster). In terms of solution quality, LBM cannot compete with CPLEX on small-to-medium-sized problems. However, CPLEX is commercial software that is quite expensive, and it is not easy to use without the necessary knowledge and skill in optimization modeling and computer programming. So even in this range of problems, LBM should still be the method of choice. Perhaps something should also be said about GRD, the only existing method known to be able to handle the type of problem addressed in this research. Our test results showed clearly that it consistently produced poor to unacceptable results. Of all the problems tested, the best class that GRD could handle was when all processes, including the manual process, were identical; the duality gap ranged from 4.7% to 8.6%, with an average of 6.8%. The results in all other cases were much worse. One can conclude that the cases where GRD might give acceptable results are those where all machines and processes, including the manual process, are identical or almost identical and different boards hardly have any components in common (commonality ratio between 1 and 1.3).

5. Finally, the results of the real case: the particular characteristics of the case study were (i) the commonality ratio was high; (ii) the automatic machines were much faster than the manual process; and (iii) the demand was on the high side. This is the type of case in which GRD performs worst, and indeed the percent above optimal produced by GRD was higher than 100% in all three demand months tested. The current heuristic used by C.Y. Tech performed acceptably, producing solutions about 17% higher than the true optimum in very fast times. CPLEX produced the true optimum but took the longest time. As for the proposed LBM algorithm: because of the unique characteristics of the case study (high commonality ratio and mixed identical/unidentical machines), we used the LBM heuristics + Greedy Component combination together with PSS. LBM achieved very near-optimal solutions, with an average percent above optimal of 0.68%, ranging from about 0.2% to 1.5%. The time used by LBM was roughly 25% to 65% of that used by CPLEX. So overall, LBM proves to be the best compromise choice as an effective and efficient planner for the printed circuit board assembly problem. It consistently provides near-optimal solutions within very reasonable computational times in real practical situations. It is much faster than CPLEX for industrial-sized problems, with solution quality not much worse than CPLEX. It consistently produces noticeably better solutions than any pure heuristics-based method, including the one currently used by C.Y. Tech. It could save C.Y. Tech about 16% of production time, which could be substantial if C.Y. Tech were operating at or near capacity, and it would help C.Y. Tech expand its capacity with minimum capital cost.

6.2 Future Work

The first obvious improvement to make to LBM is to continue to fine-tune the heuristics to search for better solutions without substantially increasing computational time. The place to work on is perhaps PSS, which could be made faster and much more effective. Another obvious place that would help cut computational time is the LP solver used: a faster LP solver that can be seamlessly integrated with the rest of the algorithm would substantially improve LBM's computational time.

In this research we focused on minimizing the production cost, using production time as its surrogate. If other components of cost (e.g., labor and overhead) have to be included, the objective function has to be adjusted and the method modified accordingly.

But perhaps more importantly, the LBM algorithm makes no explicit attempt to balance the workloads across processes or to maximize throughput. If the company operates at or near capacity and demand is high (which is the case with C.Y. Tech), then the "minimizing makespan" objective becomes very important. It should then be used in place of, or in tandem with, the "minimum production cost" objective. In either case, a new method needs to be developed or LBM has to be modified to handle the makespan-minimization objective, either alone or as part of a bi-criteria objective function.


Bibliography

[1] L.F. Mcginnis, J.C. Ammons, M. Carlyle, L. Cranmer, G. W. Depuy, K.P. Ellis, C. Tovey, and H. Xu, “Automated process planning for printed circuit card assembly”, IIE Transactions, Vol. 24 (4), pp. 18-30, 1992.

[2] J.C. Ammons, M. Carlyle, L. Cranmer, G. W. Depuy, K.P. Ellis, L.F. Mcginnis, C. Tovey, and H. Xu, “Component allocation to balance workload in printed circuit card assembly systems”, IIE Transactions, vol. 29, pp. 265-275, 1997.

[3] G. Boothroyd, Assembly automation and product design, 2nd, Taylor & Francis Group, 2005.

[4] M.L. Brandeau and C.A. Billington, “Design of manufacturing cells: operation assignment in printed circuit board manufacturing”, Journal of Intelligent Manufacturing, vol. 2, pp. 95-106, 1991.

[5] E. Duman and I. Or, “The quadratic assignment problem in the context of the printed circuit board assembly process”, Computers & Operations Research, vol. 34, pp. 163-179, 2007.

[6] J. Ashayeri and W. Seten, “A planning and scheduling model for onsertion in printed circuit board assembly”, European Journal of Operational Research, vol. 183, pp. 909-925, 2007.

[7] K.P. Ellis, L.F. Mcginnis, and J. C. Ammons, “An approach for grouping circuit cards into families to minimize assembly time on a placement machine”, IEEE Transactions on Packaging Manufacturing, vol. 26 (1), pp.22-30, 2003.

[8] A. Shtub and O. Maimon, “Role of similarity measures in PCB grouping procedures”, International Journal Production Research, vol.30, pp.973-983, 1992.

[9] M.S. Daskin, O. Maimon, A. Shtub, and D. Braha, “Grouping components in printed circuit board assembly with limited component staging capacity and single card setup: problem characteristics and solution procedures” International Journal Production Research, vol.35, pp.1617-1638, 1997.

[10] M.S. Daskin, O. Maimon, A. Shtub, D. Braha, “Grouping components in printed circuit board assembly with limited component staging capacity and single card setup: problem characteristics and solution procedures”, International Journal of Production Research, vol.35 (6), pp.1617-1638, 1997


[11] M.S. Hillier and M.L. Brandeau, “Optimal component assignment and board grouping in printed circuit board manufacturing”, Operations Research, vol. 46(5), pp. 675-689, 1998.

[12] V.J. Leon and B.A. Peters, “Replanning and analysis of partial setup strategies in printed circuit board assembly systems”, The International Journal of Flexible Manufacturing Systems, vol.8, pp. 398-412, 1996.

[13] S. Jain, M. E. Johnson, and F. Safai, “Implementing setup optimization on the shop floor”, Operations Research, vol. 43(6), pp. 843-851, 1996

[14] A. Balakrishnan and F. Vanderbeck, “A tactical planning model for mixed-model electronics assembly operations”, Operations Research, vol. 47(3), pp. 395-493, 1999.

[15] K. Altinkemer, B. Kazaz, M. Koksalan, and H. Moskowitz, “Optimization of printed circuit board manufacturing: Integrated modeling and algorithms”, European Journal of Operational Research, vol. 124, pp. 409-421, 2000

[16] Y. Crama, J. Klundert, and F.C.R. Spieksma, “Production planning problems in printed circuit board assembly”, Discrete Applied Mathematics, vol. 123, pp. 339- 361, 2002

[17] Y. Crama, A.W.J. Kolen and A.G. Oerlemans, “Throughput rate optimization in the automated assembly of printed circuit boards”, Annals of Operations Research, vol. 26, pp. 455-480, 1990.

[18] M.T. Sze, P. Ji and W.B. Lee, “Component grouping for automatic printed circuit board assembly”, International Journal of Advanced Manufacturing Technology, vol. 19, pp. 71-77, 2002.

[19] W. Lin and V. Tardif, “Component Partitioning under demand and capacity uncertainty in printed circuit board assembly”, The International Journal of Flexible Manufacturing Systems, vol.11, pp. 159-176, 1999.

[20] M.S. Hillier and M.L.Brandeau, “Cost minimization and workload balancing in printed circuit board assembly”, IIE Transactions, vol. 33, pp.547-557, 2001.

[21] G.T. Ross and R.M. Soland, “A branch and bound algorithm for the generalized assignment problem”, Mathematical Programming, vol. 8, pp. 91-103, 1975.

[22] M.L. Fisher, R. Jaikumar, and L.N.V. Wassenhove, “A multiplier adjustment method for the generalized assignment problem”, Management Science, vol. 32(9), pp. 1095-1103, 1986.


[23] M. Savelsbergh, “A branch-and-price algorithm for the generalized assignment problem”, Operations Research, vol. 45(6), pp. 831-841, 1997.

[24] R.M. Nauss, “Solving the generalized assignment problem: An optimizing and heuristic approach”, INFORMS Journal on Computing, vol. 15(3),pp. 249-266, 2003.

[25] V. Jeet and E. Kutanoglu, “Lagrangian relaxation guided problem space search heuristics for generalized assignment problems”, European Journal of Operational Research, vol. 182, pp. 1039-1056, 2007.

[26] A.J. Higgins, “A dynamic tabu search for large-scale generalized assignment problems”, Computers & Operations Research, vol. 28, pp.1039-1048, 2001.

[27] M. Yagiura and T. Ibaraki, “An ejection chain approach for the generalized assignment problem”, INFORMS Journal on Computing, vol. 16(2), pp.133-151, 2004.

[28] M. Yagiura, T. Ibaraki, and F. Glover, “A path relinking approach with ejection chain for the generalized assignment problem”, European Journal of Operational Research, vol. 169, pp. 548-569, 2006.

[29] R. Malhotra, C.S. Lalitha, P. Gupta, A. Mehra and Sonia, Combinatorial Optimization: Some Aspects, Narosa Publishing House, 2007.

[30] V.L. Nrtrdmrb, “An efficient algorithm for the uncapacitated facility location problem with totally balanced matrix”, Discrete Applied Mathematics, vol. 114, pp. 13-22, 2001.

[31] J.B. Mazzola and A.W. Neebe, “Lagrangian-relaxation-based solution procedures for a multiproduct capacitated facility location problem with choice of facility type”, European Journal of Operational Research, vol. 115, pp. 285-299, 1999.

[32] B. Korte and J. Vygen, Combinatorial Optimization: Theory and Algorithms, 3rd ed., Springer, 2006.

[33] M.A. Osorio, F. Glover, and P. Hammer, “Cutting and surrogate constraints analysis for improved multidimensional knapsack solutions”, Annals of Operations Research, vol.117(1-4), pp. 71-93, 2002.

[34] M.A. Osorio and F. Glover, “Exploiting surrogate constraint analysis for fixing variables in both bounds for multidimensional knapsack problems”, Proceedings of the Fourth Mexican International Conference on Computer Science, 2003.


[35] P.L. Hammer, M.W. Padberg, and U.N. Peled, “Constraint pairing in integer programming”, INFOR, vol. 13(1), pp. 68-81, 1975.

[36] F. Glover, H.D. Sherali, and Y. Lee, “Generating cuts from surrogate constraint analysis for zero-one and multiple choice programming”, Computational Optimization and Applications, vol. 8, pp. 151-172, 1997.

[37] M.W.P. Savelsbergh, “Preprocessing and probing techniques for mixed integer programming problems”, ORSA Journal on Computing, vol. 6(4), pp. 445-454, 1994.

[38] V. Gabrel and M. Minoux, “A scheme for exact separation of extended cover inequalities and application to multidimensional knapsack problems”, Operations Research Letters, vol. 30(4), pp. 252-264, 2003.

[39] E. Balas and E. Zemel, “Facets of the knapsack polytope from minimal covers”, SIAM Journal on Applied Mathematics, vol. 34, pp. 119-148, 1978.

[40] H. Crowder and E.L. Johnson, “Solving large-scale zero-one linear programming problems”, Operations Research, vol. 31(5), pp. 803-834, 1983.

[41] B.L. Dietrich and L.F. Escudero, “Coefficient reduction for knapsack-like constraints in 0-1 programs with variable upper bound”, Operations Research Letters, vol. 9, pp. 9-14, 1990.

[42] K.L. Hoffman and M. Padberg, “Improving LP-Representations of zero-one linear programs for branch-and-cut”, ORSA Journal on Computing, vol. 3(2), pp. 121- 134, 1991.

[43] G.L. Nemhauser and L.A. Wolsey, Integer and combinatorial optimization, John Wiley & Sons, 1999.

[44] A.M. Geoffrion, “Lagrangean relaxation for integer programming”, Mathematical Programming Study, vol. 2, pp. 82-114, 1974.

[45] M.L. Fisher, W.D. Northup, and J.F. Shapiro, “Using duality to solve discrete optimization problems: Theory and computational experience”, Mathematical Programming Study, vol. 3, pp. 56-94, 1975.

[46] M. Guignard, “Lagrangean Relaxation”, Top, vol. 11(2), pp. 151-228, 2003.

[47] M.L. Fisher, “The Lagrangian relaxation method for solving integer programming problems”, Management Science, vol. 27(1), pp. 1-18, 1981.

[48] A. Frangioni, “About Lagrangian methods in integer optimization”, Annals of Operations Research, vol. 139, pp. 163-169, 2005.

[49] V. Chankong, “Lagrangian Relaxation for Large Scale Problems”, Lecture Class: Case Western Reserve University, Electrical Engineering and Computer Science, 2005.

[50] S. Lawphongpanich, “Dynamic slope scaling procedure and Lagrangian relaxation with subproblem approximation”, Journal of Global Optimization, vol. 35, pp. 121-130, 2006.

[51] T. Larsson, M. Patriksson, and A. Stromberg, “Conditional subgradient optimization-theory and application”, European Journal of Operational Research, vol. 88, pp. 382-403, 1996.

[52] F. Fumero, “A modified subgradient algorithm for Lagrangean relaxation”, Computers & Operations Research, vol. 28, pp. 33-52, 2001.

[53] H. Wang, “An improved stepsize of the subgradient algorithm for solving the Lagrangian relaxation problem”, Computers and Electrical Engineering, vol. 29, pp. 245-249, 2003.

[54] A.M. Geoffrion, “Elements of large-scale mathematical programming: Part I”, Management Science, vol. 16(11), pp. 652-675, 1970.

[55] A.M. Geoffrion, “Duality in nonlinear programming: A simplified applications-oriented development”, SIAM Review, vol. 13(1), pp. 1-37, 1971.

[56] L.S. Lasdon, Optimization theory for large systems, Dover, 2002.

[57] E.K.P. Chong and S.H. Zak, An introduction to optimization, 2nd , John Wiley & Sons, 2001.

[58] R.H. Storer, S.D. Wu, and R. Vaccari, “New search spaces for sequencing problems with application to job shop scheduling”, Management Science, vol. 38(10), pp. 1495-1509, 1992.

[59] D. Magos, “Tabu search for the planar three-index assignment problem”, Journal of Global Optimization, vol. 8, pp. 35-48, 1996.

[60] K. Nonobe and T. Ibaraki, “A tabu search approach to the constraint satisfaction problem as a general problem solver”, European Journal of Operational Research, vol. 106, pp. 599-623, 1998.

[61] T. F. Gonzalez, Handbook of approximation algorithms and metaheuristics, Chapman & Hall/CRC, 2007.


[62] I. McLeod and H. Yu, “Timing Comparisons of Mathematica, MATLAB, R, S-Plus, C and Fortran”, www.stats.uwo.ca/faculty/aim/epubs/MatrixInverseTiming/default.htm, 2002.

[63] C.Y. Tech Co., Ltd., Company Profile, www.cytech.co.th, 2007