
Computers Math. Applic. Vol. 20, No. 3, pp. 1-14, 1990
0097-4943/90 $3.00 + 0.00
Printed in Great Britain. All rights reserved
Copyright © 1990 Pergamon Press plc

THE MODIFIED BARRIER FUNCTION METHOD FOR LINEAR PROGRAMMING AND ITS EXTENSIONS†

V. PAN
Computer Science Department, SUNY, Albany, NY 12222 and Mathematics and Computer Science Department, Lehman College, CUNY, Bronx, NY 10468, U.S.A.

(Received 17 July 1989)

Abstract—We applied the modified barrier function (MBF) method and its new extensions to the linear programming (LP) problem and compared the results with the known LP barrier method. We then extended to the MBF LP method the worst case estimate O(L\sqrt{m}) for the computational cost of the solution of an LP problem (having input size L and the number of constraints and variables m) in terms of the number of Newton's steps (such an estimate is well-known for the LP barrier methods), and we showed the improved numerical stability of Newton's steps compared with the LP barrier methods.

1. INTRODUCTION

We study the application of the modified barrier function (MBF) method, as well as of its further extensions, to linear programming (LP) problems and compare the results with the application of the known LP barrier method [1, 2]. The MBF method was presented in Ref. [3] for the constrained optimization problem:

    x^* = \arg\min\{f_0(x) \mid f_i(x) \ge 0,\ i = 1, \ldots, m\},    (1)

where x is an n-dimensional vector, and f_0(x) and all the f_i(x) are fixed functions. This method is an improvement on its predecessor, the barrier function method [4], which reduced problem (1) to the unconstrained optimization problem, x^* = \arg\min\{F(x)\}. Here either

    F(x) = f_0(x) - \lambda^{-1} \sum_i \ln f_i(x)    (2)

(Frisch [5]) or

    F(x) = f_0(x) - \sum_i (\lambda f_i(x))^{-1}    (2a)

(Carroll [6]) or

    F(x) = f_0(x) - \lambda^{-1} \sum_i \psi(f_i(x)),    (2b)

where \psi(y) is a monotone concave function, \psi(0) = 0, \psi'(0) = 1, \psi''(0) < 0.

The barrier function method has been thoroughly studied, and its power was clearly demonstrated in Ref. [4], but the major difficulty of this approach stems from the fact that F(x) grows to infinity as x \to x^*. This difficulty, however, can be avoided by using the MBFs,

    F(u, x, \lambda) = f_0(x) - \lambda^{-1} \sum_i u_i (\lambda f_i(x) + 1)^{-1},    (3)

    F(u, x, \lambda) = f_0(x) - \lambda^{-1} \sum_i u_i \ln(\lambda f_i(x) + 1),    (3a)

    F(u, x, \lambda) = f_0(x) - \lambda^{-1} \sum_i u_i \psi(\lambda f_i(x) + 1).    (3b)

†Some results of this paper have been presented at the Third SIAM Conference on Optimization, April 1989, Boston, Mass., U.S.A.

Here u_i and \lambda are positive scalar parameters, and u = [u_i] is the vector of the Lagrange multipliers. Suppose that the value \lambda is fixed, the functions f_0(x) and -f_i(x) are convex, and some customary regularity conditions hold. Then there exists the vector u = u^* [which is actually a saddle point of F(u, x, \lambda) and thus the optimum solution to the associated dual problem] such that

    x^* = \arg\min\{F(u^*, x, \lambda)\},    (4)

and (u^*, x^*, \lambda) is a saddle point of F(u, x, \lambda). Moreover, for F(u, x, \lambda) defined by equation (3a), for any u sufficiently close to u^* and for any sufficiently large \lambda, there exists the vector

    \hat{x} = \hat{x}(u, \lambda) = \arg\min\{F(u, x, \lambda)\},    (5)

and

    \max\{\|\hat{x} - x^*\|_2, \|\hat{u} - u^*\|_2\} \le a \lambda^{-1} \|u - u^*\|_2,    (6)

where u = [u_1, \ldots, u_m]^T, \hat{u}_i = (\lambda f_i(\hat{x}) + 1)^{-1} u_i for all i, a is a scalar and \|v\|_2 is the Euclidean norm of a vector v. These results (due to Polyak and extended in Theorem 3.1 below) enable us to approximate u^* by computing \hat{x} = \hat{x}(u, \lambda) and \hat{u} = \hat{u}(u, \hat{x}, \lambda) for a fixed u and for a sufficiently large \lambda. The problem now is how to compute \hat{x} given u and \lambda.
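To make the last question concrete, the following minimal sketch (in Python with NumPy; our illustration, not the paper's code) performs one Newton step toward \hat{x} for the logarithmic MBF (3a), specialized to the LP form \min\{p^T x \mid Ax - q \ge 0\} used from Section 2 onward. The function name mbf_newton_step and the step-damping rule are assumptions introduced for this example; note that the multiplier estimate w computed along the way is exactly \hat{u} of bound (6).

    # One Newton step on F(u, x, lam) of equation (3a) for the LP case:
    # f_0(x) = p^T x, f_i(x) = (Ax - q)_i. A sketch under assumed names.
    import numpy as np

    def mbf_newton_step(A, p, q, u, lam, x):
        """One Newton step toward x_hat = argmin_x F(u, x, lam), equation (5)."""
        r = A @ x - q                      # residuals r(x) = Ax - q
        w = u / (lam * r + 1.0)            # u_hat of bound (6): rescaled multipliers
        grad = p - A.T @ w                 # gradient of F with respect to x
        H = lam * (A.T * (w / (lam * r + 1.0))) @ A  # Hessian: lam A^T diag(u/(lam r + 1)^2) A
        dx = np.linalg.solve(H, -grad)     # Newton direction
        t = 1.0                            # damp the step so lam r(x) + 1 stays positive,
        while np.min(lam * (A @ (x + t * dx) - q) + 1.0) <= 0:  # i.e. x stays in the MBF domain
            t *= 0.5
        return x + t * dx, w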
In the LP barrier method, a similar problem arises for the Frisch function (2). The problem can be easily solved for a small \lambda = \lambda_0, and then the solution can be recursively extended (by using Newton's iteration) from \lambda = \lambda_k to \lambda = \lambda_{k+1} = (1 + h)\lambda_k, h = 1/(40\sqrt{m}), m being the maximum of the numbers of constraints and variables (see Refs [1, 2]). In this paper, we show that at least the same speed of increase of \lambda can be ensured by using Newton's iteration for the MBF LP method. Furthermore, we show that the MBF method avoids the numerical stability problems even where the LP barrier methods have them.

Technically, we prove a global version of bound (6) in the LP case, in which we also relax the cited restrictions on u and \lambda, and we extend the method by proving the version of equation (5) where the scalar \lambda is replaced by a diagonal matrix; this leads to a preconditioned MBF (abbreviated as PMBF) method, which gives us some additional control over the convergence and numerical stability [note the analogous preconditioned barrier function (PBF) option]. We observe that the numerical stability of the MBF method in LP, as well as of the LP barrier methods, may depend on the format of the LP problem. In the Appendix, we show such a dependence, and we also show how the numerical stability can be improved by using preconditioning and other means.

We will organize the rest of our presentation as follows: in Section 2 we introduce some definitions and recall some simple auxiliary facts. In Section 3 we state, and in Section 4 prove, the global version of equation (5), where we remove the restrictions on u and \lambda and allow preconditioning. In Sections 5-8 we describe the resulting LP algorithms and, in particular, their Newton's iteration block. In Section 9 we estimate the convergence rate. In Section 10 we compare the resulting algorithms with the LP barrier method. In Section 11 we show some further extensions of our algorithm. In Section 12 we briefly comment on the application of the MBF method to nonlinear programming.

2. SOME BASIC FACTS AND DEFINITIONS

Let R_{g,h} denote the algebra of g \times h real matrices, which are vectors if g = 1 or h = 1. For a matrix (or vector) W, let w_{ij} = (W)_{ij} denote its entry (i, j), W^T its transpose, W^{-T} the inverse of W^T, and \|W\|_2 its 2-norm. For four matrices, vectors or scalars U \in R_{g,h}, V \in R_{j,h}, W \in R_{g,k}, Z \in R_{j,k}, we define the block matrix

    \begin{bmatrix} U & W \\ V & Z \end{bmatrix}

and the two block vectors

    \begin{bmatrix} U \\ V \end{bmatrix}, \qquad \begin{bmatrix} W \\ Z \end{bmatrix}.

O_{g,h} and I_g denote the g \times h null matrix and the g \times g identity matrix, respectively; e_h denotes the vector [1, 1, \ldots, 1]^T \in R_{h,1}; we will drop the size subscripts where the context makes them unnecessary. ln denotes logarithms to the base e.

For a function f(x) and for two vectors u and v, let u_i = (u)_i and v_i = (v)_i denote their ith coordinates, let uv denote the vector whose ith coordinate is (u)_i (v)_i, and let f(u) denote the vector whose ith coordinate is f((u)_i) (which defines, in particular, ln u and u^d for any scalar d). The inequalities u \ge v and u > v mean that (u)_i \ge (v)_i and (u)_i > (v)_i for all i, respectively. diag(u) denotes the diagonal matrix with the coordinates of the vector u filling the diagonal. D_x, D_u, D_v and D_{u,v} denote the operators of differentiation with respect to all the coordinates of the vectors x, u, v and [u^T, v^T]^T, respectively.

Given A \in R_{m,n}, p \in R_{n,1}, q \in R_{m,1}, m \ge n, denote r(x) = Ax - q, and define a primal-dual pair of LP problems:

    x^* = \arg\min\{p^T x \mid r(x) \ge 0\},    (7)

    u^* = \arg\max\{q^T u \mid A^T u = p,\ u \ge 0\},    (7a)

where x \in R_{n,1} and u \in R_{m,1} [compare with equation (1)].
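To illustrate the schedule described above, the following sketch wraps the Newton step of the previous example into an outer loop that grows \lambda at the rate \lambda_{k+1} = (1 + h)\lambda_k, h = 1/(40\sqrt{m}). It is an illustration only: performing a single Newton step per value of \lambda, updating u to \hat{u} at every step, and stopping on a small (approximate) duality gap for the pair (7), (7a) are our simplifying assumptions, not the algorithm analyzed in Sections 5-9.

    # Outer loop sketch: extend the minimizer from lam_k to lam_{k+1} = (1 + h) lam_k.
    # Reuses mbf_newton_step from the previous sketch; names and stopping rule assumed.
    import numpy as np

    def mbf_path_following(A, p, q, x0, lam0=1.0, outer_iters=500, tol=1e-8):
        m = A.shape[0]
        h = 1.0 / (40.0 * np.sqrt(m))     # the growth factor h = 1/(40 sqrt(m))
        x, u, lam = x0, np.ones(m), lam0  # start from the all-ones multiplier vector
        for _ in range(outer_iters):
            x, u = mbf_newton_step(A, p, q, u, lam, x)  # one Newton step, then update u
            lam *= 1.0 + h                # recursive extension of the solution
            gap = abs(p @ x - q @ u)      # approximate duality gap of problems (7), (7a)
            if gap < tol:
                break
        return x, u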
We will assume nondegeneracy of both the primal and dual problems, so that rank A = n and the complementary slackness conditions hold in the strict form [see relations (8) and (8a) below]. To simplify the notation, we will assume that the constraints (r(x))_i \ge 0, and consequently the components of the vectors u, u^* and q and the rows of the matrix A, have been numbered so that

    \underline{u} = \min_{i \le n} (u^*)_i > 0, \quad (r(x^*))_i = 0, \quad \text{for } i \le n,    (8)

    \underline{r} = \min_{i > n} (r(x^*))_i > 0, \quad (u^*)_i = 0, \quad \text{for } i > n.    (8a)

Thus the first n constraints of the primal problem (7), (Ax - q)_i \ge 0 for i = 1, \ldots, n, are active, and the other m - n are passive at x = x^*; the first n rows of A form the basis submatrix B, and we will respectively denote

    A = \begin{bmatrix} B \\ N \end{bmatrix}, \quad q = \begin{bmatrix} q_B \\ q_N \end{bmatrix}, \quad r(x) = \begin{bmatrix} r_B(x) \\ r_N(x) \end{bmatrix}, \quad u = \begin{bmatrix} u_B \\ u_N \end{bmatrix}, \quad u^* = \begin{bmatrix} u_B^* \\ u_N^* \end{bmatrix},    (8b)

where B \in R_{n,n}, N \in R_{m-n,n}; q_B, r_B(x), u_B, u_B^* \in R_{n,1}; q_N, r_N(x), u_N, u_N^* \in R_{m-n,1}.

Remark 2.1

Actually, we do not a priori know the subdivision of the matrix A and the vectors q, r(x), u and u^* into their basis and nonbasis parts. Moreover, whenever such a subdivision is available, we may immediately compute x^* and/or u^* by solving the linear systems

    B^T u_B = p,    (9)

    B x = q_B    (9a)

and setting u_N^* = 0. This observation reduces computing x^* and u^* to renumbering the original residuals (r(x))_i and the associated dual variables (u)_i so as to make relations (8) and (8a) hold, or, equivalently, to partitioning all the dual variables (u)_i into the basis ones, such that (u^*)_i > 0, and the nonbasis ones, such that (u^*)_i = 0.
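Remark 2.1 translates directly into code: given a known set of basis row indices, x^* and u^* follow from the two linear solves (9) and (9a). The sketch below is our transcription (the function name solve_from_basis is assumed, not the paper's).

    # Compute x* and u* from a known basis partition, per Remark 2.1.
    import numpy as np

    def solve_from_basis(A, p, q, basis):
        """basis: the n indices of the active constraints, so B = A[basis] is n x n."""
        B = A[basis]                              # basis submatrix B of partition (8b)
        x_star = np.linalg.solve(B, q[basis])     # B x = q_B, equation (9a)
        u_star = np.zeros(A.shape[0])             # nonbasis multipliers: u_N* = 0
        u_star[basis] = np.linalg.solve(B.T, p)   # B^T u_B = p, equation (9)
        return x_star, u_star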