The Simplex Method for Quadratic Programming
Author(s): Philip Wolfe
Source: Econometrica, Vol. 27, No. 3 (Jul., 1959), pp. 382-398
Published by: The Econometric Society
Stable URL: http://www.jstor.org/stable/1909468

THE SIMPLEX METHOD FOR QUADRATIC PROGRAMMING

BY PHILIP WOLFE

A computational procedure is given for finding the minimum of a quadratic function of variables subject to linear inequality constraints. The procedure is analogous to the Simplex Method for linear programming, being based on the Barankin-Dorfman procedure for this problem.
1. INTRODUCTION

IN THIS PAPER, by "quadratic programming" we shall understand the problem of determining values of several real variables, subject to linear inequality constraints, which yield the extreme value of a quadratic function. Besides being a step on the way toward solution of the elaborate nonlinear programming problems which economic models often present, a usable computational procedure for quadratic programming can be applied to a number of problems of interest in themselves:

Regression. To find the best least-squares fit to given data, where certain parameters are known a priori to satisfy inequality constraints (e.g., being nonnegative).

Efficient production. Maximization of profit, assuming linear production functions and linearly varying marginal costs.

"Portfolio" problem.¹ To find a combination of random variables having given expectation and minimum variance.

Convex programming. To find the minimum of a general convex function under linear constraints using a quadratic approximation.

Let the variables of the problem constitute the n-vector x = (x_1, ..., x_n)' (' will denote transposition; we take x to be a column vector, that is, an n by 1 matrix). Letting A be an m by n matrix, and b an m by 1, we shall express the linear constraints of the problem by

(1)  x ≥ 0, Ax = b,

that is, x_j ≥ 0 (j = 1, ..., n), Σ_{j=1}^n a_ij x_j = b_i (i = 1, ..., m). We recall that any combination of linear equation and inequality constraints can be written in the form (1) if additional variables are allowed into the problem.

Let p be a 1 by n matrix, and C be an n by n symmetric matrix. We shall write the objective function, the (partly) quadratic function to be extremized subject to (1), as

(2)  f(λ,x) = λpx + 1/2 x'Cx,

or f(λ,x) = λ Σ_j p_j x_j + 1/2 Σ_{j,k} x_j c_jk x_k, where λ, a single nonnegative real parameter, can be chosen as convenient.

¹ Due to Markowitz [6].
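The remark that any mixture of linear equations and inequalities can be brought into the form (1) may be illustrated with a small numpy sketch; the instance below is invented for illustration and is not from the paper. An inequality row is converted to an equation by adjoining a nonnegative slack variable.

```python
import numpy as np

# Hypothetical instance: variables x1, x2 >= 0 subject to one
# inequality constraint  2*x1 + x2 <= 3.
A_ineq = np.array([[2.0, 1.0]])
b = np.array([3.0])

# Adjoin one slack variable per inequality row: the equation
# 2*x1 + x2 + s = 3 with s >= 0 is equivalent to the inequality.
m, n = A_ineq.shape
A_std = np.hstack([A_ineq, np.eye(m)])    # now m by (n + m), form (1)

# A feasible point of the original problem extends to the standard form
# by computing the slack value, which is nonnegative for feasible x:
x = np.array([1.0, 0.5])
s = b - A_ineq @ x
x_std = np.concatenate([x, s])
print(A_std @ x_std)                      # equals b
```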
The problem can now be stated as:

The quadratic problem for λ ≥ 0:

(3)  Minimize f(λ,x) = λpx + 1/2 x'Cx subject to x ≥ 0, Ax = b.

An important restriction must be placed on the quadratic part, C, of the objective function in order to ensure the success of the computational method: the function f must be convex, that is, C must be positive semidefinite. This condition, apparently essential in the present state of the art to all nonlinear programming schemes, ensures that any local minimum encountered in the problem will be the global solution desired. Algebraically, the assertion of positive semidefiniteness for C is that

(4)  x'Cx ≥ 0 for all x.

In economic problems, it is the assertion of nonincreasing returns to scale for all activities, since the marginal cost of changing from the program x to the program x + Δx is given by

d/dt f(λ, x + tΔx) = λpΔx + x'CΔx + t Δx'CΔx,

which will be a nondecreasing function of t. We shall assume it from now on. A more detailed discussion of the role of this property in quadratic programming is given in [4].

A number of proposals for the computational solution of quadratic programming problems have been made in the last two years; those which seem suited to high-speed digital computers are given in References [1]-[6] below. Barankin and Dorfman [1] first pointed out the linear formulation of the quadratic problem, inspiring the present approach; our Section 2 is taken, with changed notation, from Sections 1 and 3 of [1]. The principal respect in which the present method differs from these is in using only the computational mechanisms of the Simplex Method for linear programming problems. It is therefore a simple matter to convert a computing machine code for linear programming into one for quadratic programming; the SHARE linear programming code for the IBM 704 requires modification in eleven instructions for this purpose.

In the sequel this method is developed in two forms, "short" and "long."
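Condition (4) is easy to test in practice: a symmetric C is positive semidefinite exactly when all its eigenvalues are nonnegative. The following numpy sketch (matrices chosen for illustration, not from the paper) checks this for a least-squares-type C, which is semidefinite by construction, and for a rank-deficient one.

```python
import numpy as np

# C from a least-squares objective ||Dx - d||^2 is C = 2 D'D, hence
# positive semidefinite by construction; D here is an arbitrary example.
D = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])
C = 2.0 * D.T @ D

# (4) holds iff every eigenvalue of the symmetric matrix C is
# nonnegative (up to rounding error).
eigvals = np.linalg.eigvalsh(C)
print(eigvals.min() >= -1e-12)            # True: C is positive semidefinite

# A rank-deficient example: C2 = cc' vanishes on the line orthogonal
# to c, so it is semidefinite but not positive definite.
c = np.array([[1.0], [1.0]])
C2 = c @ c.T
print(np.linalg.eigvalsh(C2))             # one eigenvalue is exactly 0
```

The distinction between semidefinite and definite C is precisely what separates the long and short forms in the table of Section 1.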
The computations of the long form are like those of the short, but are aimed at solving the quadratic problem (3) for all λ ≥ 0 and at the same time avoiding certain restrictions on the use of the short form. The table below summarizes the conditions for use of these methods. The estimate of the number of Simplex Method changes-of-basis needed to solve the problem is based on experience like that described in Section 6.

Solution of (3) by:                    Short Form                 Long Form
---------------------------------------------------------------------------------
Conditions                             Either λ = 0 or C          C positive
                                       positive definite          semidefinite
Solution obtained for                  fixed λ                    all λ ≥ 0
Size of equivalent linear program      up to m + n equations,     up to m + n equations,
                                       m + 3n variables           m + 3n + 1 variables
Estimated number of basis changes
  for solution

2. PRELIMINARIES

Since we are interested, in part, in the solution of the quadratic problem for all λ ≥ 0, let us define for λ ≥ 0:

F(λ) = Min {λpx + 1/2 x'Cx : x ≥ 0, Ax = b}.

Quite a bit of information about F(λ) can be obtained without calculating it. Throughout it will be assumed, of course, that there exist feasible x, i.e., x ≥ 0 such that Ax = b. Nevertheless, we may have F(λ) = −∞ for some (and hence all) λ > 0.

First, an important feature of the positive semidefiniteness of C is given by

LEMMA 1. x'Cx = 0 implies Cx = 0.

PROOF. For any n-vector y we have 0 ≤ (y + tx)'C(y + tx) = y'Cy + 2t y'Cx for any number t, whence y'Cx = 0; Cx = 0 follows at once.

This leads to

LEMMA 2. For any λ ≥ 0, the "solution set" of all feasible x such that f(λ,x) = F(λ) is the intersection of a linear manifold with the constraint set, and px is constant on this set for λ > 0.

PROOF. Let feasible points x, y be given such that f(λ,x) = f(λ,y) = F(λ).
Letting w = y − x, for any 0 ≤ t ≤ 1 the point x + tw belongs to the constraint set; since f is convex, and f(λ, x + tw) is minimal for t = 0 and t = 1, we have f(λ, x + tw) = f(λ,x) for all 0 ≤ t ≤ 1, or λp(x + tw) + 1/2 (x + tw)'C(x + tw) = λpx + 1/2 x'Cx, which simplifies to

(λpw + x'Cw)t + 1/2 w'Cw t² = 0   [0 ≤ t ≤ 1].

Thus w'Cw = 0, whence by Lemma 1

(5)  Cw = 0, and hence pw = 0.

Conversely, it is clear that if f(λ,x) = F(λ), if w satisfies (5), and if x + tw is feasible, then f(λ, x + tw) = F(λ), so that the complete solution set for λ given is the intersection of the constraint set with the linear manifold {x + tw}. Equation (5) yields finally px = py for any two solutions.

If now for any λ ≥ 0 we choose a feasible solution x_λ such that f(λ,x_λ) = F(λ), then by Lemma 2 the value px_λ is independent of the choice of x_λ.

THEOREM 1. For λ ≥ 0, F(λ) is a concave function; px_λ is monotone nonincreasing; and x_λ is a solution y of the problem: Min {y'Cy : y ≥ 0, Ay = b, py ≤ px_λ}.

PROOF. Since f(λ,x) is linear in λ, the function F(λ) is the infimum of a family of linear functions, and hence concave. The trend of px_λ is an instance of a quite general fact. Take any λ and μ. Since x_λ minimizes f(λ,x), we have λpx_λ + 1/2 x_λ'Cx_λ ≤ λpx_μ + 1/2 x_μ'Cx_μ; and since x_μ minimizes f(μ,x), we have μpx_μ + 1/2 x_μ'Cx_μ ≤ μpx_λ + 1/2 x_λ'Cx_λ. Adding these inequalities and rearranging, we get (μ − λ)px_μ ≤ (μ − λ)px_λ, which yields px_μ ≤ px_λ for μ > λ. Finally, since x_λ does minimize λpx + 1/2 x'Cx, any y such that y'Cy < x_λ'Cx_λ will give py > px_λ, which proves the last statement.
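The qualitative claims of Theorem 1 can be checked numerically on a toy instance. The sketch below (problem data of my own choosing, not from the paper) solves (3) over a grid of λ values with scipy's general-purpose SLSQP solver, standing in for the simplex-based method the paper develops, and then verifies that px_λ is nonincreasing and that F(λ) has nonpositive second differences on the grid.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: minimize f(lam, x) = lam * p @ x + 0.5 * x @ C @ x
# over the simplex x >= 0, x1 + x2 = 1.
p = np.array([-1.0, 0.0])
C = np.array([[2.0, 0.0], [0.0, 2.0]])     # positive definite
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def solve(lam):
    """Solve problem (3) for a fixed value of lambda."""
    f = lambda x: lam * p @ x + 0.5 * x @ C @ x
    res = minimize(f, x0=np.array([0.5, 0.5]), method="SLSQP",
                   bounds=[(0, None)] * 2,
                   constraints={"type": "eq", "fun": lambda x: A @ x - b})
    return res.x, res.fun

lams = np.linspace(0.0, 2.0, 9)
sols = [solve(lam) for lam in lams]
px = [p @ x for x, _ in sols]              # px_lam for each lambda
F = [fval for _, fval in sols]             # F(lam) for each lambda

# Theorem 1: px_lam is monotone nonincreasing in lambda.
print(all(px[i + 1] <= px[i] + 1e-8 for i in range(len(px) - 1)))
# Theorem 1: F is concave, so its second differences on a uniform
# grid are nonpositive (up to solver tolerance).
print((np.diff(F, 2) <= 1e-8).all())
```

On this instance the minimizer on the line x1 + x2 = 1 is x1 = (λ + 2)/4 for λ in [0, 2], so both checks can also be confirmed by hand.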
