A Branch-and-Price Algorithm for the Generalized Assignment Problem

Martin Savelsbergh

Georgia Institute of Technology

School of Industrial and Systems Engineering

Atlanta, GA, USA

Abstract

The generalized assignment problem examines the maximum profit assignment of jobs to agents such that each job is assigned to precisely one agent, subject to capacity restrictions on the agents. A new algorithm for the generalized assignment problem is presented that employs both column generation and branch-and-bound to obtain optimal integer solutions to a set partitioning formulation of the problem.

July

Revised November

Revised October

1 Introduction

The Generalized Assignment Problem (GAP) examines the maximum profit assignment of n jobs to m agents such that each job is assigned to precisely one agent, subject to capacity restrictions on the agents. Although interesting and useful in its own right, its main importance stems from the fact that it appears as a substructure in many models developed to solve real-world problems in areas such as vehicle routing, plant location, resource scheduling, and flexible manufacturing systems.

The GAP is easily shown to be NP-hard, and a considerable body of literature exists on the search for effective enumeration algorithms to solve problems of a reasonable size to optimality (Ross and Soland; Martello and Toth; Fisher, Jaikumar, and Van Wassenhove; Guignard and Rosenwein; Karabakal, Bean, and Lohmann). A recent survey by Cattrysse and Van Wassenhove provides a comprehensive treatment of most of these methods.

In this paper we present an algorithm for the GAP that employs both column generation and branch-and-bound to obtain optimal integer solutions to a set partitioning formulation of the problem. We discuss various branching strategies that allow column generation at any node in the branch-and-bound tree. Therefore, the algorithm can be viewed as a branch-and-price algorithm that is similar in spirit to the branch-and-cut algorithms that allow row generation at any node of the branch-and-bound tree.

Many variations of the basic algorithm have been implemented using MINTO, a Mixed INTeger Optimizer (Nemhauser, Savelsbergh, and Sigismondi). MINTO is a software system that solves mixed-integer linear programs by a branch-and-bound algorithm with linear programming relaxations. It also provides automatic constraint classification, preprocessing, primal heuristics, and constraint generation. Moreover, the user can enrich the basic algorithm by providing a variety of specialized application routines that can customize MINTO to achieve maximum efficiency for a problem class.

This paper is organized as follows. Section 2 introduces both the standard and the set partitioning based formulations for the GAP. Section 3 presents the basic branch-and-price algorithm and discusses issues related to column generation and branch-and-bound. Section 4 examines the various branching strategies. Section 5 covers various implementation issues, and Section 6 describes the computational experiments that have been conducted. Finally, Section 7 examines approximation algorithms derived from the branch-and-price algorithm.

2 Formulations

In the GAP the objective is to find a maximum profit assignment of n jobs to m agents such that each job is assigned to precisely one agent, subject to capacity restrictions on the agents. The standard formulation is the following:

\[
\begin{aligned}
\max \;\; & \sum_{1 \le i \le m} \sum_{1 \le j \le n} p_{ij} x_{ij} \\
\text{subject to} \;\; & \sum_{1 \le i \le m} x_{ij} = 1, && j \in \{1,\dots,n\}, \\
& \sum_{1 \le j \le n} w_{ij} x_{ij} \le c_i, && i \in \{1,\dots,m\}, \\
& x_{ij} \in \{0,1\}, && i \in \{1,\dots,m\},\; j \in \{1,\dots,n\},
\end{aligned}
\]

where $p_{ij} \in \mathbb{Z}$ is the profit associated with assigning job $j$ to agent $i$, $w_{ij} \in \mathbb{Z}$ the claim on the capacity of agent $i$ by job $j$ if it is assigned to agent $i$, $c_i \in \mathbb{Z}$ the capacity of agent $i$, and $x_{ij}$ a variable indicating whether job $j$ is assigned to agent $i$ ($x_{ij} = 1$) or not ($x_{ij} = 0$).

The formulation underlying the branch-and-price algorithm discussed in this paper has an exponential number of variables and can be viewed as a disaggregated version of the above formulation.

Let $K_i = \{x^i_1, x^i_2, \dots, x^i_{k_i}\}$ be the set of all possible feasible assignments of jobs to agent $i$, i.e., $x^i_k = (x^i_{1k}, x^i_{2k}, \dots, x^i_{nk})$ is a feasible solution to

\[
\begin{aligned}
& \sum_{1 \le j \le n} w_{ij} x^i_{jk} \le c_i, \\
& x^i_{jk} \in \{0,1\}, \quad j \in \{1,\dots,n\}.
\end{aligned}
\]

Let $y^i_k$, for $i \in \{1,\dots,m\}$ and $k \in K_i$, be a binary variable indicating whether a feasible assignment $x^i_k$ is selected for agent $i$ ($y^i_k = 1$) or not ($y^i_k = 0$). The GAP can now be formulated as

\[
\begin{aligned}
\max \;\; & \sum_{1 \le i \le m} \sum_{1 \le k \le k_i} \Bigl( \sum_{1 \le j \le n} p_{ij} x^i_{jk} \Bigr) y^i_k \\
\text{subject to} \;\; & \sum_{1 \le i \le m} \sum_{1 \le k \le k_i} x^i_{jk} y^i_k = 1, && j \in \{1,\dots,n\}, \\
& \sum_{1 \le k \le k_i} y^i_k \le 1, && i \in \{1,\dots,m\}, \\
& y^i_k \in \{0,1\}, && i \in \{1,\dots,m\},\; k \in K_i,
\end{aligned}
\]

where the first set of constraints enforces that each job is assigned to precisely one agent and the second set of constraints enforces that at most one feasible assignment is selected for each agent. This set partitioning formulation has been used by Cattrysse, Salomon, and Van Wassenhove to develop an approximation algorithm for the GAP.

The knapsack problem associated with agent $i$ in the standard formulation, i.e.,

\[
\begin{aligned}
\max \;\; & \sum_{1 \le j \le n} p_{ij} x_{ij} \\
\text{subject to} \;\; & \sum_{1 \le j \le n} w_{ij} x_{ij} \le c_i, \\
& x_{ij} \in \{0,1\}, \quad j \in \{1,\dots,n\},
\end{aligned}
\]

has been replaced in the disaggregated formulation by

\[
\begin{aligned}
\max \;\; & \sum_{1 \le k \le k_i} \Bigl( \sum_{1 \le j \le n} p_{ij} x^i_{jk} \Bigr) y^i_k \\
\text{subject to} \;\; & \sum_{1 \le k \le k_i} y^i_k \le 1,
\end{aligned}
\]

where $x^i_1, \dots, x^i_{k_i}$ are the integral solutions to the knapsack problem. Because the linear programming relaxation of a knapsack problem contains the convex hull of its integer solutions, the LP relaxation of the disaggregated formulation provides a bound that is at least as tight as the bound provided by the LP relaxation of the standard formulation.

Observe that the disaggregated formulation is essentially obtained by applying Dantzig-Wolfe decomposition to the standard formulation, where the knapsack constraints have been placed in the subproblem. Consequently, the value of the bound provided by the LP relaxation of the disaggregated formulation is equal to the value of the Lagrangean dual obtained by dualizing the semi-assignment constraints, i.e.,

\[
\begin{aligned}
\min_{u} \; \max \;\; & \sum_{1 \le i \le m} \sum_{1 \le j \le n} p_{ij} x_{ij} + \sum_{1 \le j \le n} u_j \Bigl( 1 - \sum_{1 \le i \le m} x_{ij} \Bigr) \\
\text{subject to} \;\; & \sum_{1 \le j \le n} w_{ij} x_{ij} \le c_i, && i \in \{1,\dots,m\}, \\
& x_{ij} \in \{0,1\}, && i \in \{1,\dots,m\},\; j \in \{1,\dots,n\}.
\end{aligned}
\]

See, for example, Nemhauser and Wolsey (Section II) for an exposition of the relation between Lagrangean relaxation and Dantzig-Wolfe decomposition. The algorithms of Fisher, Jaikumar, and Van Wassenhove, of Guignard and Rosenwein, and of Karabakal, Bean, and Lohmann are based on bounds obtained by solving the above Lagrangean dual.

Our computational experiments will show that the branch-and-price algorithm discussed in this paper, although in theory using the same bounds, outperforms the optimization algorithms of Fisher, Jaikumar, and Van Wassenhove, of Guignard and Rosenwein, and of Karabakal, Bean, and Lohmann. A plausible explanation for this phenomenon is the fact that the use of the simplex method provides much better convergence properties than the use of subgradient and dual ascent methods for the solution of the Lagrangean dual.

Let $\bar y$ be any feasible solution to the LP relaxation of the disaggregated formulation and let $z_{ij} = \sum_{1 \le k \le k_i} x^i_{jk} \bar y^i_k$; then $z$ constitutes a feasible solution to the LP relaxation of the standard formulation. Furthermore, we have the following.

Proposition. If some $\bar y^i_k$ is fractional, then there must be a $j$ such that $z_{ij}$ is fractional.

Proof. Suppose there is no job $j$ such that $z_{ij}$ is fractional. Let $F = \{k \in K_i \mid 0 < \bar y^i_k < 1\}$ be the set of fractional variables associated with agent $i$. We may assume that $|F| \ge 2$, because if $F = \{p\}$, then $z_{ij} = x^i_{jp} \bar y^i_p$ is fractional for every $j$ with $x^i_{jp} = 1$. Note that the convexity constraint associated with agent $i$ implies that $\sum_{k \in F} \bar y^i_k \le 1$ and that no variable $\bar y^i_k$ with $k \notin F$ can be positive, so that $z_{ij} = \sum_{k \in F} x^i_{jk} \bar y^i_k$. Since each $z_{ij}$ is integer and $0 \le z_{ij} \le \sum_{k \in F} \bar y^i_k \le 1$, the sum $\sum_{k \in F} x^i_{jk} \bar y^i_k$ is either $0$ or equal to $\sum_{k \in F} \bar y^i_k$ for $j = 1,\dots,n$. Consequently, for $j = 1,\dots,n$: if $\sum_{k \in F} x^i_{jk} \bar y^i_k = 0$, then $x^i_{jk} = 0$ for all $k \in F$; if $\sum_{k \in F} x^i_{jk} \bar y^i_k = \sum_{k \in F} \bar y^i_k$, then $x^i_{jk} = 1$ for all $k \in F$. But that means that we have duplicate columns, a contradiction.

3 Branch-and-price algorithms

Column generation is a pricing scheme for solving large-scale linear programs (LPs). Instead of pricing out nonbasic variables by enumeration, in a column generation approach the column with the most negative (or, in a maximization setting, most positive) reduced price is found by solving an optimization problem. Gilmore and Gomory introduced the column generation approach in the context of cutting stock problems. In their case, as in many other cases, the linear program is a relaxation of an integer program (IP). However, when an LP relaxation is solved by column generation, the solution is not necessarily integral, and it is not clear how to obtain an optimal, or even feasible, integer solution to the IP, since standard branch-and-bound techniques can interfere with the column generation algorithm. Recently, various researchers have started to develop customized branching strategies to handle these difficulties, e.g., Desrochers, Desrosiers, and Solomon for vehicle routing problems, Desrochers and Soumis as well as Anbil, Tanga, and Johnson for crew scheduling problems, and Vance, Barnhart, Johnson, and Nemhauser for cutting stock problems.

Consider the linear programming relaxation of the disaggregated formulation for the GAP. This master problem cannot be solved directly due to the exponential number of columns. However, a restricted master problem that considers only a subset of the columns can be solved directly using, for instance, the simplex method. Additional columns for the restricted master problem can be generated as needed by solving the pricing problem

\[
\max_{1 \le i \le m} \{ z(KP_i) - v_i \},
\]

where $v_i$ is the optimal dual price, from the solution to the restricted master problem, associated with the convexity constraint of agent $i$, and $z(KP_i)$ is the value of the optimal solution to the following knapsack problem:

\[
\begin{aligned}
\max \;\; & \sum_{1 \le j \le n} (p_{ij} - u_j) x^i_j \\
\text{subject to} \;\; & \sum_{1 \le j \le n} w_{ij} x^i_j \le c_i, \\
& x^i_j \in \{0,1\}, \quad j \in \{1,\dots,n\},
\end{aligned}
\]

with $u_j$ being the optimal dual price, from the solution to the restricted master problem, associated with the partitioning constraint of job $j$. A column prices out favorably to enter the basis if its reduced cost is positive. Consequently, if the objective value of the column generation subproblem is less than or equal to zero, then the current optimal solution for the restricted master problem is also optimal for the unrestricted master problem.
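To make the interaction between the restricted master problem and the pricing problem concrete, the following sketch shows one column generation round in Python. It assumes a hypothetical LP routine solve_restricted_master that returns the dual prices u (partitioning constraints) and v (convexity constraints) and a hypothetical 0-1 knapsack routine solve_knapsack; neither is part of MINTO or of any particular library, and adding every favorably priced column is only one of several possible selection rules.

```python
def generate_columns(profits, weights, capacities, columns,
                     solve_restricted_master, solve_knapsack):
    """One column generation round for the set partitioning master problem.

    profits[i][j], weights[i][j], and capacities[i] describe the GAP instance;
    columns holds the (agent, assignment vector) pairs currently in the
    restricted master.  Both solver callbacks are assumed, not given here.
    """
    m, n = len(capacities), len(profits[0])
    # Solve the restricted master LP and obtain the dual prices:
    # u[j] for the partitioning constraint of job j,
    # v[i] for the convexity constraint of agent i.
    u, v = solve_restricted_master(columns)

    new_columns = []
    for i in range(m):
        # Pricing problem for agent i: a 0-1 knapsack with reduced profits
        # p_ij - u_j and capacity c_i.
        reduced = [profits[i][j] - u[j] for j in range(n)]
        value, x = solve_knapsack(reduced, weights[i], capacities[i])
        # The column prices out favorably if its reduced cost is positive.
        if value - v[i] > 1e-9:
            new_columns.append((i, x))

    # An empty list signals that the current restricted master solution is
    # optimal for the unrestricted master problem.
    return new_columns
```

The round would be repeated, adding the returned columns to the restricted master, until no column prices out favorably.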

However, unless the $y^i_k$'s are integer, the solution to the master problem is not a solution to the original IP. In fact, there may not even be a feasible integer solution among the columns present in the master problem. However, computational experiments have indicated that the value of the LP relaxation does provide a very tight bound on the value of the optimal IP solution.

Applying a standard branch-and-bound procedure to the master problem with its existing columns will not guarantee an optimal (or even feasible) solution. After branching, it may be the case that there exists a feasible assignment that would price out favorably, but this assignment is not present in the master problem. Therefore, to find an optimal solution we must generate columns after branching. However, suppose that we use the conventional branching rule based on variable dichotomy: we branch on the fractional variable $y^i_k$, and we are in the branch in which $y^i_k$ is fixed to zero. In the column generation phase it is possible, and quite likely, that the optimal solution to the subproblem will be the same assignment represented by $y^i_k$. In that case it becomes necessary to generate the column with the 2nd highest reduced cost. At depth $n$ in the branch-and-bound tree we may need to find the column with the $n$th highest reduced cost.

In order to prevent columns that have been branched on from being regenerated, we must choose a branching rule that is compatible with the pricing problem. By compatible we mean that we must be able to modify the subproblem so that columns that are infeasible due to the branching constraints will not be generated, and the column generation subproblem will remain tractable.

4 Branching strategies

The challenge in formulating a branching strategy is to find one that excludes the current fractional solution, validly partitions the solution space of the problem, and provides a pricing problem that is still tractable.

We have indicated in Section 2 that any feasible solution to the disaggregated formulation has a corresponding feasible solution to the standard formulation, and that if a solution to the disaggregated formulation is fractional, then the corresponding solution to the standard formulation is also fractional.

The idea now is to perform branching using the standard formulation while working with the disaggregated formulation. Branching strategies for linear programs are based on fixing variables, either a single variable (variable dichotomy) or a set of variables (GUB dichotomy). Therefore, for our idea to work, we have to show that fixing a single variable or fixing a set of variables in the standard formulation has an equivalent in the disaggregated formulation, and that the resulting branching scheme is compatible with the pricing problem.

In the standard formulation, fixing variable $x_{ij}$ to zero forbids job $j$ to be assigned to agent $i$, and fixing variable $x_{ij}$ to one requires job $j$ to be assigned to agent $i$. In the disaggregated formulation this can be accomplished as follows. To forbid a job $j$ to be assigned to agent $i$, all variables for columns associated with agent $i$ that have a one in the row corresponding to job $j$ are fixed to zero, i.e., if $x^i_{jk} = 1$, then $y^i_k = 0$ for all $k \in K_i$. To require a job $j$ to be assigned to agent $i$, all variables for columns associated with agent $i$ that do not have a one in the row corresponding to job $j$ are fixed to zero, i.e., if $x^i_{jk} = 0$, then $y^i_k = 0$ for all $k \in K_i$, and all variables for columns not associated with agent $i$ that have a one in the row corresponding to job $j$ are fixed to zero, i.e., if $x^l_{jk} = 1$, then $y^l_k = 0$ for all $l \ne i$, $1 \le l \le m$, and $k \in K_l$.

It is not hard to see that the resulting branching scheme is also compatible with the pricing problem. The pricing problem involves the solution of a knapsack problem for each agent. Forbidding the assignment of job $j$ to agent $i$ is accomplished by not considering job $j$ in the knapsack for agent $i$, and requiring the assignment of job $j$ to agent $i$ is accomplished by not considering job $j$ in the knapsack for agent $i$ and reducing the capacity of knapsack $i$ by the claim on its capacity by job $j$, as sketched below.
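As an illustration of this compatibility, the sketch below sets up the pricing knapsack for a single agent at a node of the tree, given the (hypothetical) bookkeeping sets forbidden[i] and required[i] of jobs that branching has forbidden for, respectively assigned to, agent i; it reuses the hypothetical solve_knapsack routine and data layout of the earlier sketch.

```python
def price_agent_at_node(i, profits, weights, capacities, u,
                        forbidden, required, solve_knapsack):
    """Solve the pricing knapsack for agent i under the branching decisions.

    forbidden[i] and required[i] are sets of job indices fixed by branching.
    Returns (knapsack value z(KP_i), full column for agent i), or None if
    the branching decisions leave no feasible column for this agent.
    """
    n = len(profits[i])
    # Required jobs are removed from the knapsack and their weight is
    # subtracted from the capacity; forbidden jobs are simply not offered.
    free = [j for j in range(n)
            if j not in forbidden[i] and j not in required[i]]
    capacity = capacities[i] - sum(weights[i][j] for j in required[i])
    if capacity < 0:
        return None

    reduced = [profits[i][j] - u[j] for j in free]
    value, x_free = solve_knapsack(reduced,
                                   [weights[i][j] for j in free], capacity)

    # Assemble the column: required jobs are always included.
    column = [0] * n
    for j in required[i]:
        column[j] = 1
    for idx, j in enumerate(free):
        column[j] = x_free[idx]
    value += sum(profits[i][j] - u[j] for j in required[i])
    return value, column
```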

5 Implementation issues

Initial restricted master problem

To start the column generation procedure, an initial restricted master problem has to be provided. This initial restricted master problem must have a feasible LP relaxation to ensure that proper dual information is passed to the pricing problem. We have chosen to start with one column for each agent, corresponding to the optimal knapsack solution for that agent, and a dummy column consisting of all ones with a large negative profit. The dummy column ensures that a feasible solution to the LP relaxation exists. This dummy column will be kept at all nodes of the branch-and-bound tree for the same reason.
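A minimal sketch of this initialization, under the same assumptions as the earlier sketches (a hypothetical solve_knapsack routine, columns stored as (agent, assignment vector) pairs); the dummy column is marked with the artificial agent index None and would receive its large negative profit in the master objective.

```python
def initial_columns(profits, weights, capacities, solve_knapsack):
    """Build the initial restricted master problem: one column per agent,
    corresponding to that agent's optimal knapsack solution, plus an
    all-ones dummy column."""
    m, n = len(capacities), len(profits[0])
    columns = []
    for i in range(m):
        _, x = solve_knapsack(profits[i], weights[i], capacities[i])
        columns.append((i, x))
    # The dummy column covers every job; its large negative profit keeps it
    # out of any optimal solution, but it guarantees that the LP relaxation
    # of the restricted master is feasible at every node of the tree.
    columns.append((None, [1] * n))
    return columns
```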

Column generation subproblem

Any column with positive reduced cost is a candidate to enter the basis. The pricing problem defined above finds the column with the highest reduced cost. Therefore, if a column with positive reduced cost exists, the column generation will always identify a candidate column. This guarantees that the optimal solution to the linear program will be found. However, solving the column generation problem involves the solution of several knapsack problems, which may be computationally prohibitive. Fortunately, for the column generation scheme to work it is not necessary to always select the column with the highest reduced cost; any column with a positive reduced cost will do.

Therefore, various alternative column generation schemes can be developed. An obvious alternative is to select the first column encountered with a positive reduced cost. To prevent a bias towards a certain agent, a random starting point can be used. This reduces the computation time per iteration. However, since the number of iterations may increase, it is not clear whether the overall effect is positive. Another alternative is to select all columns encountered with a positive reduced cost. It is hard to estimate the effect on the computation time: obviously, it does not affect the time required to solve the pricing problem, but it probably increases the time required to solve the restricted master, and it may increase or decrease the number of iterations.

Yet another alternative is the use of approximation algorithms to solve the pricing problem. To guarantee that the optimal solution to the linear program will be found, a two-phase approach has to be used. A fast approximation algorithm is used to solve the pricing problem as long as it is able to identify a column with positive reduced cost. In case the approximation algorithm fails to identify a column with positive reduced cost, an optimization algorithm is invoked to prove optimality or to generate a column with positive reduced cost. This process is repeated until the linear program is solved to optimality. We have not explored this alternative, since the pricing problem that has to be solved is a fairly small knapsack problem and there exist very efficient optimization algorithms to solve such problems.

Primal heuristics

It is well known that the availability of good feasible solutions may reduce the size of the branch-and-bound tree considerably. The approximation algorithm that has been incorporated in our branch-and-price algorithm is a combination of the algorithms proposed by Martello and Toth and by Jornsten and Nasberg.

Martello and Toth developed the following two-phase approximation algorithm for the GAP. Let $f_{ij}$ be a measure of the desirability of assigning job $j$ to agent $i$; Martello and Toth suggest four measures, $f_{ij} = p_{ij}$, $f_{ij} = p_{ij}/w_{ij}$, $f_{ij} = -w_{ij}$, and $f_{ij} = -w_{ij}/c_i$. In the first phase, an attempt is made to construct an initial feasible solution: iteratively, consider all unassigned jobs and determine the job $j^*$ having the maximum difference between the largest and the second largest $f_{ij}$, $1 \le i \le m$; job $j^*$ is then assigned to the agent for which $f_{ij^*}$ is maximum. In the second phase, if a feasible solution has been found, the solution is improved through local exchanges.

The approximation algorithm developed by Jornsten and Nasberg relies heavily on local exchanges. An initial solution is constructed by assigning each job to its most profitable agent. If this solution is feasible, which rarely happens, it is also optimal. Otherwise, some of the capacity constraints are violated. Next, local exchanges guided by some infeasibility measure are applied to obtain a feasible solution. Finally, if a feasible solution has been found, local exchanges are applied again, but now to improve the quality of the solution.

The approximation algorithm incorporated in our branch-and-price algorithm is basically the Martello and Toth algorithm, extended with local exchange procedures to handle the situation in which the attempt to construct an initial feasible solution fails. It is invoked at every node of the branch-and-bound tree using $f_{ij} = z_{ij}$, i.e., $f_{ij} = \sum_{1 \le k \le k_i} x^i_{jk} \bar y^i_k$, as the measure of the desirability of assigning job $j$ to agent $i$.
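The construction phase of this heuristic might be sketched as follows; the desirability values f[i][j] would be set to the z_ij computed from the current LP solution, and the local exchange phases that follow a failed or successful construction are omitted here. All names are illustrative.

```python
def construct_assignment(f, weights, capacities):
    """Construction phase of the Martello-Toth style heuristic.

    Repeatedly pick the unassigned job with the largest difference between
    its best and second best desirability over the agents that can still
    accommodate it, and assign it to its most desirable feasible agent.
    Returns a job -> agent map, or None if some job cannot be placed
    (in which case local exchanges would be tried next)."""
    m, n = len(capacities), len(f[0])
    residual = list(capacities)
    assignment = {}
    unassigned = set(range(n))
    while unassigned:
        best_job, best_agent, best_regret = None, None, None
        for j in unassigned:
            # Agents that still have room for job j, ranked by desirability.
            feasible = sorted((f[i][j], i) for i in range(m)
                              if weights[i][j] <= residual[i])
            if not feasible:
                return None
            top_value, top_agent = feasible[-1]
            second = feasible[-2][0] if len(feasible) > 1 else float('-inf')
            regret = top_value - second
            if best_regret is None or regret > best_regret:
                best_job, best_agent, best_regret = j, top_agent, regret
        assignment[best_job] = best_agent
        residual[best_agent] -= weights[best_agent][best_job]
        unassigned.remove(best_job)
    return assignment
```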

Branching scheme

In Section 4 we have shown that branching strategies based on variable fixing are compatible with the pricing problem. We have explored two such branching strategies. In the first, we branch on the fractional variable $x_{ij}$ with fractional part closest to $\frac{1}{2}$: we set $x_{ij} = 1$ on one branch and $x_{ij} = 0$ on the other branch. In case of ties, we select the variable with the highest profit $p_{ij}$. In the second, we branch on the semi-assignment constraint $\sum_{1 \le i \le m} x_{ij} = 1$ that contains the most fractional variables: we set $\sum_{1 \le i \le i^*} x_{ij} = 0$ on one branch and $\sum_{i^* < i \le m} x_{ij} = 0$ on the other branch, where we have chosen $i^*$ to be as close as possible to $m/2$ (note that there has to be at least one fractional variable on both sides of $i^*$). In case of ties, we select the semi-assignment constraint for which the difference between the two partial sums $\sum_{1 \le i \le i^*} x_{ij}$ and $\sum_{i^* < i \le m} x_{ij}$ is as small as possible.
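The following sketch indicates how the two division schemes might pick their branching object from the values z_ij implied by the current LP solution; the tie-breaking rules follow the description above, and all names and tolerances are illustrative.

```python
def select_branching(z, profits, eps=1e-6):
    """Return the branching candidates of the two division schemes.

    z[i][j] is the value of x_ij implied by the current LP solution.
    Variable dichotomy: the fractional x_ij closest to 1/2 (ties: largest p_ij).
    GUB dichotomy: the job whose semi-assignment constraint has the most
    fractional variables, split near m/2 with fractional variables on both
    sides; ties broken by the most balanced pair of partial sums."""
    m, n = len(z), len(z[0])
    frac = lambda a: eps < a < 1.0 - eps

    # Variable dichotomy: fractional variable closest to 1/2, ties broken
    # by the highest profit.
    fractional = [(i, j) for i in range(m) for j in range(n) if frac(z[i][j])]
    var_choice = max(fractional,
                     key=lambda ij: (-abs(z[ij[0]][ij[1]] - 0.5),
                                     profits[ij[0]][ij[1]]),
                     default=None)

    # GUB dichotomy.
    gub_choice, best_score = None, None
    for j in range(n):
        rows = [i for i in range(m) if frac(z[i][j])]
        if len(rows) < 2:
            continue
        # i* as close as possible to m/2, with at least one fractional
        # variable on each side of the split.
        istar = min(range(rows[0], rows[-1]),
                    key=lambda i: abs(i + 1 - m / 2.0))
        left = sum(z[i][j] for i in range(istar + 1))
        right = sum(z[i][j] for i in range(istar + 1, m))
        score = (len(rows), -abs(left - right))
        if best_score is None or score > best_score:
            best_score, gub_choice = score, (j, istar)
    return var_choice, gub_choice
```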

The above branching strategies specify how the current set of feasible solutions is to be divided into two smaller subsets. They do not specify how the subproblem to be solved next is to be selected. We have considered two selection strategies: depth-first search and best-bound search. Depth-first search is usually applied to get (hopefully good) feasible solutions fast; experience shows that feasible solutions are more likely to be found deep in the tree than at nodes near the root. Having a good feasible solution is necessary to be able to prune nodes and thus to reduce the size of the branch-and-bound tree. Best-bound search is motivated by the observation that the node containing the best bound has to be considered to prove optimality, so it may as well be explored first.

Recall that in our branch-and-price algorithm for the GAP we invoke our primal heuristic at each node of the branch-and-bound tree, and that the primal heuristic uses the current LP solution to measure the desirability of assignments. Furthermore, observe that a best-bound search strategy jumps around the branch-and-bound tree much more than a depth-first search strategy. Consequently, if a best-bound search strategy is used, the primal heuristic is more likely to see different measures of desirability of assignments early on in the search process than when a depth-first search strategy is used. This increases the chance that the primal heuristic will identify a good feasible solution.

6 Computational results

The branch-and-price algorithm has been implemented using MINTO, a Mixed INTeger Optimizer (Nemhauser, Savelsbergh, and Sigismondi). MINTO is a software system that solves mixed-integer linear programs by a branch-and-bound algorithm with linear programming relaxations. It also provides automatic constraint classification, preprocessing, primal heuristics, and constraint generation. Moreover, the user can enrich the basic algorithm by providing a variety of specialized application routines that can customize MINTO to achieve maximum efficiency for a problem class. MINTO can be built either on top of the CPLEX callable library or on top of IBM's Optimization Subroutine Library (OSL). Unless stated otherwise, our computational experiments have been conducted with MINTO on top of CPLEX and have been run on an IBM RS/6000. The algorithm of Horowitz and Sahni has been used to solve the knapsack problems.
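As an indication of what such a knapsack routine might look like, the following is a compact dynamic programming implementation with the same interface as the hypothetical solve_knapsack used in the earlier sketches; it is not the Horowitz-Sahni algorithm used in our implementation, and it assumes integer weights and capacities.

```python
def solve_knapsack(profits, weights, capacity):
    """0-1 knapsack by dynamic programming over capacities.

    Returns (best value, 0-1 selection vector).  Items with non-positive
    profit are never worth packing and are skipped."""
    n, cap = len(profits), int(capacity)
    best = [0.0] * (cap + 1)                        # best[c]: best value with capacity c
    keep = [[False] * (cap + 1) for _ in range(n)]  # keep[j][c]: item j used at c
    for j in range(n):
        if profits[j] <= 0:
            continue
        w = int(weights[j])
        for c in range(cap, w - 1, -1):
            if best[c - w] + profits[j] > best[c]:
                best[c] = best[c - w] + profits[j]
                keep[j][c] = True
    # Trace back the selected items.
    x, c = [0] * n, cap
    for j in range(n - 1, -1, -1):
        if keep[j][c]:
            x[j] = 1
            c -= int(weights[j])
    return best[cap], x
```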

We have conducted four computational experiments to determine the effectiveness and efficiency of our branch-and-price algorithm.

Optimization algorithms for the GAP are generally tested on four classes of random problems, usually referred to as A, B, C, and D, generated according to the following rules (see, for instance, Guignard and Rosenwein):

A: $p_{ij}$ and $w_{ij}$ are integers drawn from uniform distributions; $c_i$ is set to a multiple of $n/m$ plus a fraction of $\max_{1 \le i \le m} \sum_{j \in J_i} w_{ij}$, where $J_i = \{j \mid i = \arg\min_{r} p_{rj}\}$.

B: Same as A for $p_{ij}$ and $w_{ij}$; $c_i$ is a fixed percentage of the $c_i$ in A.

C: Same as A for $p_{ij}$ and $w_{ij}$; $c_i$ is a fixed fraction of $\sum_{1 \le j \le n} w_{ij}/m$.

D: Same as C for $c_i$; $w_{ij}$ is an integer drawn from a uniform distribution, and $p_{ij} = w_{ij} + k$, where $k$ is an integer drawn from a uniform distribution.

The above scheme generates instances for the minimization form of the GAP. Since our algorithm is designed for the maximization form of the GAP, all instances are converted to the maximization form by the following transformation: let $t = \max_{1 \le i \le m,\, 1 \le j \le n} p_{ij}$; we replace $p_{ij}$ by $t - p_{ij}$ for all $i \in \{1,\dots,m\}$ and $j \in \{1,\dots,n\}$.

In the first two experiments we have concentrated on identifying the best choices for the pricing scheme and the branching scheme. For these experiments we have used various sizes ($m$ and $n$) for all problem classes A, B, C, and D. All computational results presented are based on randomly generated instances for each size and class under consideration.

In the first experiment we have concentrated on the influence of the chosen algorithm for the pricing problem. For this set of instances we have compared the all-positive, the best-positive, and the first-positive strategies. The results can be found in Table 1. The computational results show that all strategies have a comparable performance. For the next experiments we have chosen to use the all-positive strategy, since this strategy required the smallest amount of computation time over all instances.

In the second experiment we have concentrated on the influence of the chosen branching strategy. For this set of instances we have compared the division schemes based on variable dichotomy and on GUB dichotomy, and the depth-first and best-bound selection schemes. The results can be found in Tables 2 and 3. The computational results show that the best-bound selection scheme clearly outperforms the depth-first selection scheme, and that the division scheme based on GUB dichotomy and the division scheme based on variable dichotomy have a comparable performance. For the next experiments we have chosen to use the best-bound selection scheme and the division scheme based on GUB dichotomy, since this combination required the smallest amount of computation time over all instances.

In the final two experiments we have concentrated on the overall performance of our branch-and-price algorithm. For these experiments we have used various sizes ($m$ and $n$) for all problem classes A, B, C, and D. All computational results presented are based on randomly generated instances for each size and class under consideration. Note that many instances in this test set are larger than those that have been used in the computational experiments reported in earlier papers.

In the third experiment we have concentrated on the quality of the bounds. For the test instances we have compared the value of the linear programming relaxation of the standard formulation, the value of the linear programming relaxation of the standard formulation plus lifted knapsack covers, and the value of the linear programming relaxation of the disaggregated formulation. We have included the second bound because knapsack covers have proven to be quite effective in improving the quality of the bounds for general integer programs; see, for instance, Crowder, Johnson, and Padberg and Gu, Nemhauser, and Savelsbergh. The lifted knapsack covers are automatically generated by MINTO; for a description of the specific algorithms embedded in MINTO we refer the reader to Gu, Nemhauser, and Savelsbergh. The results can be found in Tables 4 and 5.

Table 1: Comparison of the pricing strategies (all-positive, best-positive, first-positive). For each instance class (A, B, C, D) the table reports the average and maximum number of nodes and CPU time.

The computational results in Table 4 show that the linear programming bound of the disaggregated formulation is quite good; even for the instances in the most difficult problem class, D, the average integrality gap is very small. A closer examination also reveals that the difference in quality between the bounds based on the standard formulation and the bound based on the disaggregated formulation is larger for instances with a small $n/m$ ratio. This phenomenon can be explained as follows. The ratio $n/m$ represents the average number of jobs assigned to an agent. If the average number of jobs assigned to an agent is small, then the LP relaxations of the knapsack problems associated with the agents are typically weak. Consequently, solving these knapsacks to optimality, as is done in the disaggregated formulation, will lead to substantially stronger bounds. If the average number of jobs assigned to an agent is large, then the LP relaxations of the knapsack problems associated with the agents are typically strong, and solving them to optimality will not lead to substantially stronger bounds.

Table 2: Comparison of the branching division schemes (GUB dichotomy and variable dichotomy) under best-bound node selection. For each instance class (A, B, C, D) the table reports the average and maximum number of nodes and CPU time.

The computational results in Table 5 show that the increased quality of the bounds comes at a price: the computation times have increased as well, especially on problem classes with a high $n/m$ ratio. This phenomenon can be explained as follows. If the average number of jobs assigned to an agent is small, then the knapsack problems associated with the agents typically have a small number of feasible solutions, which is a favorable situation for a column generation approach. If the average number of jobs assigned to an agent is large, then the knapsack problems associated with the agents typically have a large number of feasible solutions, many of which may have comparable objective function values, which is not a favorable situation for column generation approaches.

Table 3: Comparison of the branching division schemes (GUB dichotomy and variable dichotomy) under depth-first node selection. For each instance class (A, B, C, D) the table reports the average and maximum number of nodes and CPU time.

The arguments presented above indicate that we can expect our branch-and-price algorithm to do well especially on problem instances with a relatively small $n/m$ ratio.

In the final experiment we have compared our branch-and-price algorithm with the algorithm of Karabakal, Bean, and Lohmann. Their branch-and-bound algorithm solves the Lagrangean dual discussed in Section 2 to obtain upper bounds and extends earlier work of Fisher, Jaikumar, and Van Wassenhove and of Guignard and Rosenwein. The results can be found in Tables 6 and 7. The computational results show that, as anticipated, our branch-and-price algorithm performs better on problem classes with a relatively small $n/m$ ratio, whereas the Karabakal, Bean, and Lohmann algorithm performs better on problem classes with a large $n/m$ ratio. As such, the branch-and-price algorithm and the Lagrangean dual algorithm complement each other nicely.

7 Approximation algorithms

The quality of the linear programming bound associated with the disaggregated formulation and the fact that good feasible solutions are usually found early on in the solution process suggest that truncated tree search algorithms may provide very good approximation algorithms. In truncated tree search algorithms, the number of nodes evaluated in the solution process is reduced according to some prespecified scheme. Truncated tree search algorithms present a tradeoff between effectiveness and efficiency.

We have considered two different schemes to reduce the number of evaluated nodes in the solution process.

In the first scheme, no more than a prespecified fixed number of nodes will be evaluated. The advantage of this scheme is the fact that it guarantees that the time required to produce a solution is fairly predictable. The disadvantage is that it is impossible to say anything beforehand about the quality of the solution produced by the algorithm; it is not even guaranteed that a solution will be found.

In the second scheme, a node is fathomed if $z_{LP} \le (1 + \alpha) z_{IP}$, where $z_{LP}$ is the value of the linear programming solution at the node, $z_{IP}$ the value of the best known integer programming solution, and $\alpha$ an optimality tolerance. The advantage of this scheme is the fact that it guarantees that the value of the solution produced by the algorithm is within $100\alpha$ percent of the optimal value. The disadvantage is that it is impossible to say anything beforehand about the time required to produce a solution.
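A minimal sketch of the fathoming test used in the second scheme, with symbols mirroring those above (z_lp, z_ip, and the tolerance alpha are illustrative names, and the test is stated for the maximization form of the GAP).

```python
def fathom(z_lp, z_ip, alpha):
    """Return True if a node with LP bound z_lp can be discarded, given the
    best known integer value z_ip and the optimality tolerance alpha."""
    return z_lp <= (1.0 + alpha) * z_ip
```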

Very few approximation algorithms exist for the GAP. Most of them consist of two phases: a construction phase, in which an initial feasible solution is constructed, and an improvement phase, in which the initial feasible solution is improved. To the best of our knowledge, the linear relaxation heuristic (LRH) proposed by Trick is one of the best among these heuristics, in terms of quality of solution as well as solution times.

In an independent study, Cattrysse, Salomon, and Van Wassenhove have used the set partitioning formulation underlying the algorithms developed and discussed in this paper to develop an approximation algorithm for the GAP. Their algorithm consists of two phases. In the first phase, the linear programming relaxation of the master problem is solved approximately; in the process, a set of columns, i.e., feasible assignments of jobs to machines, is obtained. In the second phase, an enumeration scheme developed by Garfinkel and Nemhauser is used to identify a feasible solution to the GAP among the columns generated in the first phase. They have tested their approximation algorithm on instances of various sizes from class C.

We have compared the performance of three truncated tree search algorithms, corresponding to three different settings of the optimality tolerance, to the performance of the linear relaxation heuristic on ten randomly generated instances in each of two problem classes of type D, and on two sets of ten even larger randomly generated instances of type D. The results can be found in Tables 8 through 11. These computational experiments have been conducted with MINTO on top of CPLEX and have been run on an IBM RS/6000. The computational results show that the truncated tree search algorithms clearly outperform the linear relaxation heuristic in terms of solution quality, with an acceptable increase in computation time for two of the tolerance settings and a considerable increase in computation time for the third. All in all, the truncated tree search algorithms provide a good balance between effectiveness and efficiency.

Acknowledgment

We would like to thank James Bean and Mike Trick for making their codes available to us. Furthermore, we would like to thank an anonymous referee for his comments and suggestions, which have helped improve the quality and readability of the paper.

References

R. Anbil, R. Tanga, E.L. Johnson, A Global Approach to Crew Scheduling, Report COC, Georgia Institute of Technology, Atlanta.

D.G. Cattrysse, L.N. Van Wassenhove, A survey of algorithms for the generalized assignment problem, European J. Oper. Res.

D.G. Cattrysse, M. Salomon, L.N. Van Wassenhove, A set partitioning heuristic for the generalized assignment problem, European J. Oper. Res.

M. Desrochers, J. Desrosiers, M. Solomon, A new optimization algorithm for the vehicle routing problem with time windows, Oper. Res.

M. Desrochers, F. Soumis, A column generation approach to the urban transit crew scheduling problem, Transportation Science.

M.L. Fisher, R. Jaikumar, L.N. Van Wassenhove, A multiplier adjustment method for the generalized assignment problem, Management Science.

R.S. Garfinkel, G.L. Nemhauser, The set partitioning problem: set covering with equality constraints, Oper. Res.

P.C. Gilmore, R.E. Gomory, A linear programming approach to the cutting stock problem, Oper. Res.

Z. Gu, G.L. Nemhauser, M.W.P. Savelsbergh, Cover Inequalities for Integer Programs: Computation, Report LEC, Georgia Institute of Technology, Atlanta.

M. Guignard, M. Rosenwein, An improved dual-based algorithm for the generalized assignment problem, Oper. Res.

E. Horowitz, S. Sahni, Computing partitions with applications to the knapsack problem, Journal of the ACM.

K. Jornsten, M. Nasberg, A new Lagrangian relaxation approach to the generalized assignment problem, European J. Oper. Res.

N. Karabakal, J.C. Bean, J.R. Lohmann, A Steepest Descent Multiplier Adjustment Method for the Generalized Assignment Problem, Report, University of Michigan, Ann Arbor.

S. Martello, P. Toth, An algorithm for the generalized assignment problem, in: J.P. Brans (ed.), North-Holland, Amsterdam.

G.L. Nemhauser, L.A. Wolsey, Integer and Combinatorial Optimization, Wiley, Chichester.

G.L. Nemhauser, M.W.P. Savelsbergh, G.C. Sigismondi, MINTO, a Mixed INTeger Optimizer, Oper. Res. Letters.

G.T. Ross, R.M. Soland, A branch-and-bound algorithm for the generalized assignment problem, Math. Prog.

M.A. Trick, A linear relaxation heuristic for the generalized assignment problem, Naval Research Logistics.

P. Vance, C. Barnhart, E.L. Johnson, G.L. Nemhauser, Solving Binary Cutting Stock Problems by Column Generation and Branch-and-Bound, Report COC, Georgia Institute of Technology, Atlanta.

Table 4: Quality of the bounds I: integrality gap. For each instance class (A, B, C, D) and size (n, m) the table reports the average and maximum integrality gap of the LP relaxation of the standard formulation, of the standard formulation with lifted knapsack covers, and of the disaggregated formulation.

Table 5: Quality of the bounds II: computation time. For each instance class (A, B, C, D) and size (n, m) the table reports the average and maximum computation time for the three LP relaxations of Table 4.

Table 6: Results for the Karabakal, Bean, and Lohmann algorithm. For each instance class (A, B, C, D) and size (n, m) the table reports the average and maximum number of nodes and CPU time; some entries are based only on the instances solved within the node limit.

Table 7: Results for the Savelsbergh algorithm. For each instance class (A, B, C, D) and size (n, m) the table reports the average and maximum number of nodes and CPU time.

Tables 8-11: Performance of the truncated tree search algorithms on the four sets of class D instances. Each table reports, per instance, the solution value and CPU time for the linear relaxation heuristic (LRH) and for the three truncated tree search variants, together with the optimal value (Tables 8 and 9) or an upper bound on it (Tables 10 and 11).