Using an Interior Point Method in a Branch and Bound Algorithm for Integer Programming

July

Brian Borchers

Department of Mathematics

Weir Hall

New Mexico Tech

Socorro, NM

borchers@jupiter.nmt.edu

John E. Mitchell

Department of Mathematical Sciences

Rensselaer Polytechnic Institute

Troy, NY

mitchj@rpi.edu

Subject Classification: Programming, integer, branch-and-bound.

Abstract

This paper describes an experimental code that has been developed to solve zero-one mixed integer linear programs. The experimental code uses a primal-dual interior point method to solve the linear programming subproblems that arise in the solution of mixed integer linear programs by the branch and bound method. Computational results for a number of test problems are provided.

1  Introduction

Mixed integer linear programming problems are often solved by branch and bound methods. Branch and bound codes such as those described in the literature normally use the simplex algorithm to solve the linear programming subproblems that arise. In this paper we describe an experimental branch and bound code for zero-one mixed integer linear programming problems that uses an interior point method to solve the LP subproblems.

This project was motivated by the observation that interior point methods tend to quickly find feasible solutions with good objective values, but take a relatively long time to converge to an accurate solution. For example, Figure 1 shows the sequence of dual objective values generated during the solution of a sample LP by the primal-dual method. After five iterations the solution is dual feasible and a lower bound is available. After ten iterations the dual solution has an objective value within a small percentage of the optimal objective value. However, it takes eighteen iterations to solve the problem to optimality. Furthermore, the solution estimates generated by an interior point method tend to converge steadily to an optimal solution. Figure 2 shows the sequence of values of the variable x18 for the sample LP. After about ten iterations it becomes apparent that x18 is converging to a fractional value.



[Figure 1: Lower bounds for a sample problem. Dual objective value vs. iterations.]

[Figure 2: Estimates of x18 vs. iterations.]

Within a branch and bound algorithm, accurate solutions to the LP subproblems are often not needed. Any dual solution with a higher objective value than a known integer solution provides a bound sufficient to fathom the current



subproblem. Furthermore, if it becomes apparent that the optimal solution to the current subproblem includes an integer variable at a fractional value, we can branch early, without solving the current subproblem to optimality. As we shall see later, this early branching significantly improves the performance of our branch and bound code.

However, the simplex method has a significant advantage over interior point methods within a branch and bound algorithm. In the branch and bound algorithm we can warm start the solution of each new subproblem with the optimal solution from the parent subproblem. Using the simplex method, a new optimal solution can usually be found after only a few simplex iterations. With interior point methods, warm starts are not as effective.

In order to determine whether the advantages of using an interior point method outweigh the advantages of using the simplex method, we have developed an experimental code that uses the primal-dual interior point method and early branching. We have tested this code on a number of sample problems and compared the performance of the code to the simplex based branch and bound code in IBM's Optimization Subroutine Library (OSL). These computational results are the first contribution of the paper. The second contribution of the paper is the early branching idea, which might be applicable to other branch and bound algorithms.

The paper is organized as follows. In Section 2 we describe the interior point method that we have used to solve the subproblems. We describe the branch and bound algorithm in Section 3. We present computational results for a number of sample problems in Section 4. Our conclusions are presented in Section 5.

2  Solving the LP Subproblems

The experimental code solves problems of the form

    (P)  min  c^T x
         subject to  Ax = b
                     x >= 0
                     x <= u.

Some of the x variables are restricted to the values zero and one. Here A is an m by n matrix, c is an n by 1 vector, and b is an m by 1 vector. Some of the primal variables may be unbounded; for these variables we write u_i = ∞. We introduce slack variables s, so that the constraint x <= u can be rewritten as x + s = u together with the nonnegativity constraint s >= 0.

The LP dual of this problem can be written as

    (D)  max  b^T y - u^T w
         subject to  A^T y - w + z = c
                     w, z >= 0.

If x_i is an unbounded primal variable, then we fix w_i at zero and let s_i = ∞.

Within the branch and bound algorithm we will solve a number of LP relaxations of the original problem (P) using the primal-dual method, which is summarized in the following algorithm.

Algorithm 1

Given an initial primal solution (x, s) with x, s > 0 and x + s = u, and an initial dual solution (w, y, z) with w, z > 0, repeat the following iteration until the convergence criterion has been met:

1. Compute the barrier parameter μ.

2. Compute the diagonal matrices X = diag(x), Z = diag(z), S = diag(s), W = diag(w), and let Θ = (X^-1 Z + S^-1 W)^-1.

3. Compute the Newton's method steps Δx, Δy, Δs, Δw, and Δz using

       (A Θ A^T) Δy = b - Ax + A Θ (c - A^T y - z + w - μ X^-1 e + Z e + μ S^-1 e - W e)

       Δx = Θ (A^T Δy - (c - A^T y - z + w) + μ X^-1 e - Z e - μ S^-1 e + W e)

       Δs = -Δx

       Δz = X^-1 (μ e - X Z e) - X^-1 Z Δx

       Δw = S^-1 (μ e - S W e) - S^-1 W Δs.

4. Find step sizes α_p and α_d that ensure x + α_p Δx > 0, s + α_p Δs > 0, z + α_d Δz > 0, and w + α_d Δw > 0. If possible, make full steps with α_p = 1 and α_d = 1.

5. Let x = x + α_p Δx, s = s + α_p Δs, y = y + α_d Δy, w = w + α_d Δw, and z = z + α_d Δz.
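The primal-dual Newton step above can be sketched in a few lines of NumPy. This is a minimal dense illustration, not the experimental code's sparse implementation; Θ is stored as a vector of diagonal entries, and the damping factor eta in the step-size rule is our assumption (the text only asks for steps that keep the iterates strictly positive, with full steps where possible).

```python
import numpy as np

def newton_step(A, b, c, x, s, y, w, z, mu):
    """One Newton step of the primal-dual method (dense sketch)."""
    theta = 1.0 / (z / x + w / s)      # diagonal of Theta = (X^-1 Z + S^-1 W)^-1
    r_p = b - A @ x                    # primal residual b - Ax
    r_d = c - A.T @ y - z + w          # dual residual c - A^T y - z + w
    rhs = r_p + A @ (theta * (r_d - mu / x + z + mu / s - w))
    dy = np.linalg.solve(A @ (theta[:, None] * A.T), rhs)  # (A Theta A^T) dy = rhs
    dx = theta * (A.T @ dy - r_d + mu / x - z - mu / s + w)
    ds = -dx                           # keeps x + s = u
    dz = mu / x - z - (z / x) * dx     # from Z dx + X dz = mu e - XZe
    dw = mu / s - w - (w / s) * ds     # from W ds + S dw = mu e - SWe
    return dx, ds, dy, dw, dz

def step_sizes(x, s, w, z, dx, ds, dw, dz, eta=0.9995):
    """Step lengths alpha_p, alpha_d keeping the iterates strictly positive."""
    def alpha(pairs):
        a = 1.0
        for v, dv in pairs:
            neg = dv < 0
            if neg.any():
                a = min(a, eta * np.min(-v[neg] / dv[neg]))
        return a
    return alpha([(x, dx), (s, ds)]), alpha([(w, dw), (z, dz)])
```

The correctness check is that the computed steps satisfy the linearized KKT conditions: A Δx equals the primal residual, A^T Δy + Δz - Δw equals the dual residual, and the linearized complementarity equations hold componentwise.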

For an initial solution, and for the computation of μ in step 1, we use methods described in the literature. We consider the current solution feasible if

    ||b - Ax|| / ||x|| <= ε_p

and

    ||c - A^T y + w - z|| / (||y|| + ||w|| + ||z||) <= ε_d

for small tolerances ε_p and ε_d.

We terminate the primal-dual algorithm and declare the current solution optimal when the current solution is primal and dual feasible and when

    |c^T x - (b^T y - u^T w)| / |b^T y - u^T w| <= ε_g.

This ensures that the solution is primal and dual feasible and that the duality gap is small relative to the objective value.
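A direct transcription of these stopping tests might look like the sketch below. The numeric tolerances are parameters here because the paper's values did not survive extraction; the defaults are our assumption.

```python
from math import sqrt

def norm(v):
    return sqrt(sum(vi * vi for vi in v))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def matvec(A, x):
    return [dot(row, x) for row in A]

def is_optimal(A, b, c, u, x, y, w, z, eps_p=1e-8, eps_d=1e-8, eps_g=1e-8):
    """Primal feasibility, dual feasibility, and relative duality gap tests."""
    # primal feasibility: ||b - Ax|| / ||x||
    primal = norm([bi - axi for bi, axi in zip(b, matvec(A, x))]) / norm(x) <= eps_p
    # dual feasibility: ||c - A^T y + w - z|| / (||y|| + ||w|| + ||z||)
    At_y = [dot([A[i][j] for i in range(len(A))], y) for j in range(len(c))]
    res = [ci - ayi + wi - zi for ci, ayi, wi, zi in zip(c, At_y, w, z)]
    dual = norm(res) / (norm(y) + norm(w) + norm(z)) <= eps_d
    # relative duality gap: |c^T x - (b^T y - u^T w)| / |b^T y - u^T w|
    dual_obj = dot(b, y) - dot(u, w)
    gap = abs(dot(c, x) - dual_obj) / abs(dual_obj) <= eps_g
    return primal and dual and gap
```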

The most difficult part of the computation is calculating Δy in the primal-dual step. As long as the constraint matrix A is sparse and has no dense columns, we can hope that the matrix A Θ A^T will also be sparse. The experimental codes take advantage of this sparsity by saving the matrix in sparse form and making use of routines from the Yale Sparse Matrix Package (YSMP) or IBM's Engineering and Scientific Subroutine Library (ESSL) to solve the systems of equations. We have found that the routines from YSMP are more effective on very sparse problems, while the routines from ESSL work best on relatively dense problems.

If the linear programming problem is primal or dual infeasible, then the algorithm will loop without ever finding a feasible solution. However, in our branch and bound algorithm it is reasonable to assume that the initial LP relaxation of the problem will be primal and dual feasible. As a result, all of the other subproblems in the branch and bound tree will be at least dual feasible, and we need only detect LP subproblems which are dual unbounded and primal infeasible.

In theory, we would know that a dual subproblem was unbounded if the current dual solution was dual feasible, Δw >= 0 and Δz >= 0, and b^T Δy - u^T Δw > 0. However, because of numerical problems in the calculation of Δw and Δz, this does not work well in practice. We have developed an alternative procedure for detecting infeasible subproblems.

After each primal-dual iteration we compute directions Δw' and Δz' by

    Δw'_i = Δw_i - min(Δw_i, Δz_i)   if u_i < ∞
    Δw'_i = 0                        otherwise

and

    Δz'_i = Δz_i - min(Δw_i, Δz_i)   if u_i < ∞
    Δz'_i = Δz_i                     otherwise.

If the previous dual solution was feasible and if b^T Δy - u^T Δw' > 0, then Δy, Δw', and Δz' give us a direction in which the dual solution is feasible and improving. If Δw' and Δz' are nonnegative, then we can move as far as we wish in this improving direction, and the dual problem is unbounded.
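The unboundedness test above can be sketched as follows. Marking unbounded variables with `u[i] = None` is our encoding choice, not the paper's.

```python
def dual_unbounded(b, u, dy, dw, dz):
    """Check for an improving dual ray using the modified directions.

    u[i] is None for unbounded variables (where w_i is fixed at zero).
    """
    dw_bar, dz_bar = [], []
    for ui, dwi, dzi in zip(u, dw, dz):
        if ui is not None:
            m = min(dwi, dzi)
            dw_bar.append(dwi - m)  # subtracting the same amount from w_i and
            dz_bar.append(dzi - m)  # z_i preserves A^T y - w + z = c
        else:
            dw_bar.append(0.0)
            dz_bar.append(dzi)
    improving = (sum(bi * dyi for bi, dyi in zip(b, dy))
                 - sum(ui * wi for ui, wi in zip(u, dw_bar) if ui is not None)) > 0
    return improving and all(v >= 0.0 for v in dw_bar + dz_bar)
```

Note that for bounded variables Δw'_i and Δz'_i are automatically nonnegative, since the componentwise minimum was subtracted from both; the nonnegativity check can only fail on the Δz' entries of unbounded variables.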

Within the branch and bound algorithm we will need to restart the primal-dual method after fixing a zero-one variable at zero or one. We could simply use the last primal and dual solutions to the parent subproblem, but very small values of x_i, s_i, w_i, or z_i can lead to numerical problems. Instead, we use a warm start procedure similar to one described in the literature. If x_i or s_i is less than a tolerance γ, then we set the variable to γ and adjust the slack as needed. If w_i or z_i is less than γ and x_i has an upper bound, then we add γ to both w_i and z_i; this helps to retain dual feasibility. If x_i has no upper bound and z_i is less than γ, then we add γ to z_i. Our code uses a small fixed value of γ.
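A sketch of this warm start adjustment follows; the default value of gamma is our assumption, since the paper's tolerance did not survive extraction.

```python
def warm_start(x, s, w, z, u, gamma=0.01):
    """Perturb the parent subproblem's solution before restarting (sketch).

    u[i] is None for variables without an upper bound.
    """
    x, s, w, z = list(x), list(s), list(w), list(z)
    for i, ui in enumerate(u):
        if ui is not None:
            if x[i] < gamma:                      # push x_i off its bound,
                x[i], s[i] = gamma, ui - gamma    # keeping x_i + s_i = u_i
            elif s[i] < gamma:
                s[i], x[i] = gamma, ui - gamma
            if w[i] < gamma or z[i] < gamma:
                w[i] += gamma                     # adding gamma to both w_i and
                z[i] += gamma                     # z_i preserves A^T y - w + z = c
        elif z[i] < gamma:                        # unbounded variable: w_i stays zero
            z[i] += gamma
    return x, s, w, z
```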

3  The Algorithm

A flowchart of our branch and bound algorithm appears in Figure 3. The algorithm maintains one large data structure: the tree of subproblems. Each leaf node in the tree is a record containing a description of which variables have been fixed and the best known primal and dual solutions for that subproblem. The dual solution consists of m + n numbers, while the primal solution consists of n numbers. We store only x, y, and w; the values of s and z can be calculated when they are needed. In contrast, a simplex based branch and bound code would normally save only a list of the variables that had been fixed at zero or one and a listing of the variables in the current basis. Thus our algorithm uses somewhat more storage than a simplex based code.
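The leaf records described above might be sketched as follows; the field names are ours, not the paper's.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A leaf of the branch and bound tree (sketch).

    Only x, y, and w are stored; s = u - x and z = c - A^T y + w
    can be recomputed from the dual constraint when needed.
    """
    fixed: dict                 # zero-one variable index -> 0 or 1
    x: list                     # best known primal solution (n numbers)
    y: list                     # dual variables for Ax = b (m numbers)
    w: list                     # dual variables for x <= u (n numbers)
    lb: float = float("-inf")   # best known lower bound for this subproblem
```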

[Figure 3: The branch and bound algorithm. Flowchart: start and initialize the B&B tree; while the tree is nonempty, pick a new subproblem and take a primal-dual step; if the subproblem is infeasible or lb >= ub, delete it; if a zero-one variable appears fractional, branch; if an integer solution is found, set ub = min(ub, obj); when the tree is empty, output the solution and stop.]

In addition to the problem data and the branch and bound tree, the algorithm maintains two variables. The upper bound ub is the objective value for

the best known integer solution; it is initialized to +∞. For each subproblem we obtain lower bounds lb, which can be used to fathom a subproblem when lb >= ub.

When picking a new subproblem, our algorithm uses a depth first search strategy until an integer solution has been found. From then on, it picks the remaining subproblem with the lowest estimated objective value. These estimates are based on pseudo-costs, similar to those described in the literature.

After each primal-dual step, we use a heuristic to determine if any of the zero-one variables appear to be converging to fractional values. If the heuristic determines that one of the zero-one variables is approaching a fractional value, then the algorithm branches on the current subproblem to create two new subproblems. The heuristic could be wrong in a number of ways: the current subproblem could have an integer optimal solution, the current subproblem could be infeasible, or the optimal value of the current subproblem could be higher than ub, which would allow us to fathom the current subproblem. In each of these cases the algorithm would have to consider more subproblems than if it had not branched early. However, the algorithm will eventually recover from such a mistake, since the zero-one variables will eventually all be fixed at zero or one, and at that point each subproblem will be solved to optimality, fathomed by lower bound, or shown to be infeasible.

Our heuristic is based on the work of El-Bakry, Tapia, and Zhang, who have shown that for a fractional variable x_i the ratios x_i^(k+1)/x_i^k and s_i^(k+1)/s_i^k go to one as the solution approaches optimality. Furthermore, the ratios w_i^(k+1)/w_i^k and z_i^(k+1)/z_i^k go to zero as the solution approaches optimality. Although the version of the primal-dual method used by El-Bakry, Tapia, and Zhang is slightly different from the version that we use, these ratios still provide a good heuristic indication of whether or not a variable is tending to a fractional value.

Our version of the heuristic is as follows. First, we do not search for fractional variables until we have a solution which is dual feasible and within a small tolerance of primal feasibility. Furthermore, we do not search for fractional variables if it appears that the optimal value of the current subproblem might be large enough to fathom the subproblem by bound. We base this decision on the last two dual objective values: if extrapolating b^T y^(k-1) - u^T w^(k-1) and b^T y^k - u^T w^k forward suggests that the dual objective will reach ub, then we delay searching for fractional variables in hopes of fathoming the current subproblem by bound. Although this prevents the algorithm from branching as early as possible, this safeguard acts to reduce the total number of subproblems solved.
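One plausible reading of this safeguard is a linear extrapolation of the dual objective; the exact test did not survive extraction, so the rule below is an assumption.

```python
def delay_fractional_search(dual_prev, dual_curr, ub):
    """Delay the search for fractional variables when a linear extrapolation
    of the last two dual objective values b^T y - u^T w suggests the
    subproblem may be fathomed by bound (sketch; the extrapolation rule is
    an assumption)."""
    return dual_curr + (dual_curr - dual_prev) >= ub
```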

Once the above conditions are met, we declare a zero-one variable x_i fractional when it satisfies

    |x_i^(k+1) / x_i^k - 1| <= ε_1

    |s_i^(k+1) / s_i^k - 1| <= ε_1

    w_i^(k+1) / w_i^k <= ε_2

    z_i^(k+1) / z_i^k <= ε_2

for tolerances ε_1 and ε_2. In this heuristic we put more stress on the ratios of x and s, because we have found that the ratios of the w and z variables do not go to zero as quickly as the ratios of the x and s variables go to one.
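The test can be sketched as below. The default thresholds are placeholders, since the paper's numeric values did not survive extraction; ε_2 is looser than ε_1, reflecting the extra stress on the x and s ratios.

```python
def looks_fractional(x_prev, x_curr, s_prev, s_curr,
                     w_prev, w_curr, z_prev, z_curr,
                     eps1=0.05, eps2=0.25):
    """Heuristic test that a zero-one variable is converging to a fractional
    value: x and s ratios near one, w and z ratios near zero (sketch;
    eps1 and eps2 are assumed values)."""
    return (abs(x_curr / x_prev - 1.0) <= eps1
            and abs(s_curr / s_prev - 1.0) <= eps1
            and w_curr / w_prev <= eps2
            and z_curr / z_prev <= eps2)
```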

4  Computational Results

We tested our experimental codes on seven sets of sample problems. The first three sets of sample problems consist of randomly generated set covering problems. The fourth set of sample problems is a set of capacitated facility location problems taken from the literature. Problem set five consists of two problems given to us by AT&T Bell Laboratories. Problem set six consists of the problem khb from the MIPLIB collection of test problems. Problem set seven consists of the problem misc from the MIPLIB collection of test problems.

Table I gives statistics on the size of these test problems. For each problem we give the number of rows, columns, and zero-one variables. We also give statistics on the density of the constraint matrix A, the density of A A^T, and the Cholesky factors of A A^T after minimum degree ordering.

All computations were performed in double precision on an IBM ES mainframe under the AIX operating system. The experimental codes were written in Fortran and compiled with the VS Fortran compiler and VAST preprocessor. CPU times were measured with the CPUTIME subroutine of the Fortran runtime library. These times exclude the time required to read in the problems and to output solutions. For comparison, we also solved the sample problems using routines from IBM's Optimization Subroutine Library (OSL).

The optimal objective function values and solutions from OSL were given to seven significant digits. The experimental codes were designed to find an optimal objective value good to about eight significant digits. In all cases the experimental codes produced the same integer solutions as OSL. The optimal objective values generally matched to seven significant digits, although there are slight differences in the seventh digit in some cases.

For each set of problems we first solved the LP relaxations of the problems using the OSL simplex routine EKKSSLV, the OSL primal-dual routine EKKBSLV, and our implementation of the primal-dual method, called SOLVELP. Figure 4 shows how the two implementations of the primal-dual method performed in comparison to OSL's simplex method. For the problems in problem sets one, two, and three, the primal-dual codes were competitive with EKKSSLV, while on problem sets four, five, six, and seven the EKKSSLV routine was faster.

[Table I: Sample problem statistics — rows, columns, zero-one variables, and the densities of A, A A^T, and the Cholesky factor L, for each problem set.]

[Figure 4: LP solution times of SOLVELP and EKKBSLV relative to OSL simplex (EKKSSLV), by problem set.]

The

OSL implementation of the primal-dual method is generally faster than our experimental implementation of the primal-dual method. This can be explained by more sophisticated sparse matrix linear algebra routines and by OSL's use of the predictor-corrector scheme.

In order to determine the effectiveness of the early branching heuristics, two versions of the experimental code were developed. The first version of the code, called BB, uses early branching as described in the previous section. The second version of the code, called FULLBB, solves each subproblem to optimality, without any early branching. Thus we can determine the effectiveness of the early branching heuristics.

We then solved the sample problems with the two experimental codes and with the OSL subroutine EKKMSLV. EKKMSLV is a sophisticated branch and bound code that uses the dual simplex method to solve the linear programming subproblems. Figure 5 shows the number of subproblems solved by the two versions of the experimental code relative to the number of subproblems solved by EKKMSLV. In general, the two experimental codes solved about as many subproblems as OSL. Furthermore, the version of the code with early branching did not solve more subproblems than the version of the code without early branching. This indicates that the use of early branching did not significantly increase the number of subproblems solved.

Problem seven is unusual. Although this problem has over a hundred more zero-one variables than any of the other test problems, EKKMSLV was able to solve it with relatively few LP subproblems, while both the BB code and the FULLBB code required considerably more. For this problem it appears that our procedure for selecting subproblems to be solved was not as effective as OSL's.

Early branching was able to reduce the number of iterations per subproblem used by the experimental code. Figure 6 shows the number of iterations per subproblem for the two experimental codes.

[Figure 5: Subproblems solved by BB and FULLBB, relative to EKKMSLV, by problem set.]

[Figure 6: Iterations per subproblem for BB and FULLBB, by problem set.]

[Figure 7: CPU times of BB and FULLBB relative to EKKMSLV, by problem set.]

In each case, the BB code with

early branching used fewer iterations per subproblem. In most cases the BB code used significantly fewer iterations per subproblem than FULLBB.

Figure 7 shows the total CPU time used by the experimental codes relative to the CPU time used by EKKMSLV. On problem sets one, two, and three, the experimental code BB is reasonably competitive with EKKMSLV. On problem sets four, five, six, and seven, OSL dominates. However, OSL's simplex method was significantly faster than our primal-dual codes in solving the LP relaxations of these problems, and it is not surprising that the simplex based branch and bound code is superior.

5  Conclusions

The experimental code was not competitive with OSL's simplex based branch and bound procedure on most of the test problems. Since the experimental code and EKKMSLV generally solved about the same number of subproblems, it appears that the advantage of warm starting the simplex method in EKKMSLV outweighed the advantages of early branching in the experimental code.

However, there are a number of ways in which the experimental code could be improved. First, the implementation of the primal-dual method could be speeded up; in some cases our primal-dual LP solver took twice as long to solve the LP relaxation of a problem as OSL's implementation of the primal-dual method. Second, a better procedure for warm starting the primal-dual method could be developed; this is an area in which very little work has been done to date. Third, improvements in the heuristics for detecting fractional variables might be possible. Fourth, the experimental code is lacking in support for special ordered sets, implicit enumeration, and other techniques that can speed up the branch and bound algorithm.

Furthermore, our computational testing has been limited to relatively small problems; the largest test problem that we have solved has a modest number of rows. Since interior point methods for linear programming are generally more effective on larger problems, it is possible that a branch and bound code using an interior point method would be superior to a conventional branch and bound code on larger problems.

Although the computational results in this paper are not immediately encouraging, we feel that progress in interior point methods for linear programming, or the need to solve larger mixed integer programming problems, will make further research worthwhile.

The second contribution of this paper was the early branching technique. In most cases the code with early branching solved about the same number of subproblems as the code without early branching. At the same time, the code with early branching used significantly fewer iterations per subproblem. As a result, the experimental code with early branching was significantly faster than the code without early branching. The early branching idea might also be applicable to branch and bound algorithms for other classes of mixed integer problems.

Acknowledgements

This research was partially supported by an ONR grant.

References

[1] Umit Akinc and Basheer M. Khumawala. An Efficient Branch and Bound Algorithm for the Capacitated Warehouse Location Problem. Management Science.

[2] Robert E. Bixby, E. Andrew Boyd, and Ronni R. Indovina. MIPLIB: A Test Set of Mixed Integer Programming Problems. SIAM News.

[3] In-Chan Choi, Clyde L. Monma, and David F. Shanno. Further Development of a Primal-Dual Interior Point Method. ORSA Journal on Computing.

[4] S. C. Eisenstat, M. C. Gursky, M. H. Schultz, and A. H. Sherman. The Yale Sparse Matrix Package I: The Symmetric Codes. International Journal for Numerical Methods in Engineering.

[5] A. S. El-Bakry, R. A. Tapia, and Y. Zhang. A Study of Indicators for Identifying Zero Variables in Interior-Point Methods. Technical report, Rice University.

[6] J. J. H. Forrest and J. A. Tomlin. Implementing Interior Point Linear Programming Methods in the Optimization Subroutine Library. IBM Systems Journal.

[7] A. M. Geoffrion and R. E. Marsten. Integer Programming Algorithms: A Framework and State-of-the-Art Survey. Management Science.

[8] IBM. Engineering and Scientific Subroutine Library: Guide and Reference. Kingston, NY.

[9] IBM. VAST for VS Fortran: User's Guide. Kingston, NY.

[10] IBM. VS Fortran: Language and Library Reference. Kingston, NY.

[11] IBM. IBM Optimization Subroutine Library: Guide and Reference. Kingston, NY, third edition.

[12] A. Land and S. Powell. Computer Codes for Problems of Integer Programming. In P. L. Hammer, E. L. Johnson, and B. H. Korte, editors, Discrete Optimization II, Annals of Discrete Mathematics. North-Holland, New York.

[13] Irvin J. Lustig, Roy E. Marsten, and David F. Shanno. Starting and Restarting the Primal-Dual Interior Point Method. Technical report, Princeton University.

[14] Irvin J. Lustig, Roy E. Marsten, and David F. Shanno. Computational Experience With a Primal-Dual Interior Point Method for Linear Programming. Linear Algebra and Its Applications.

[15] Kevin A. McShane, Clyde L. Monma, and David Shanno. An Implementation of a Primal-Dual Interior Point Method for Linear Programming. ORSA Journal on Computing.

[16] R. Gary Parker and Ronald L. Rardin. Discrete Optimization. Academic Press, Boston.