Chapter 3 Quadratic Programming

3.1 Constrained quadratic programming problems

A special case of the NLP arises when the objective functional f is quadratic and the constraints h, g are linear in x ∈ ℝ^n. Such an NLP is called a Quadratic Programming (QP) problem. Its general form is

    minimize    f(x) := (1/2) x^T B x − x^T b                        (3.1a)
    over        x ∈ ℝ^n
    subject to  A1 x = c,                                            (3.1b)
                A2 x ≤ d,                                            (3.1c)

where B ∈ ℝ^{n×n} is symmetric, A1 ∈ ℝ^{m×n}, A2 ∈ ℝ^{p×n}, and b ∈ ℝ^n, c ∈ ℝ^m, d ∈ ℝ^p. As we shall see in this chapter, the QP (3.1a)-(3.1c) can be solved iteratively by active set strategies or interior point methods, where each iteration requires the solution of an equality constrained QP problem.

3.2 Equality constrained quadratic programming

If only equality constraints are imposed, the QP (3.1a)-(3.1c) reduces to

    minimize    f(x) := (1/2) x^T B x − x^T b                        (3.2a)
    over        x ∈ ℝ^n
    subject to  A x = c,                                             (3.2b)

where A ∈ ℝ^{m×n}, m ≤ n. For the time being we assume that A has full row rank m. The KKT conditions for the solution x* ∈ ℝ^n of the QP (3.2a),(3.2b) give rise to the following linear system:

    [ B   A^T ] [ x* ]   [ b ]
    [ A   0   ] [ λ* ] = [ c ],                                      (3.3)
    \_________/
       =: K

where λ* ∈ ℝ^m is the associated Lagrange multiplier. We denote by Z ∈ ℝ^{n×(n−m)} the matrix whose columns span Ker A, i.e., AZ = 0.

Definition 3.1 (KKT matrix and reduced Hessian)
The matrix K in (3.3) is called the KKT matrix, and the matrix Z^T B Z is referred to as the reduced Hessian.

Lemma 3.2 (Existence and uniqueness)
Assume that A ∈ ℝ^{m×n} has full row rank m ≤ n and that the reduced Hessian Z^T B Z is positive definite. Then the KKT matrix K is nonsingular. Hence, the KKT system (3.3) has a unique solution (x*, λ*).

Proof: The proof is left as an exercise. □

Under the conditions of the previous lemma, the second order sufficient optimality conditions are satisfied, so that x* is a strict local minimizer of the QP (3.2a),(3.2b). A direct argument shows that x* is in fact a global minimizer.

Theorem 3.3 (Global minimizer)
Let the assumptions of Lemma 3.2 be satisfied and let (x*, λ*) be the unique solution of the KKT system (3.3). Then x* is the unique global solution of the QP (3.2a),(3.2b).

Proof: Let x ∈ F be a feasible point, i.e., Ax = c, and set p := x* − x. Then Ap = 0. Substituting x = x* − p into the objective functional, we get

    f(x) = (1/2) (x* − p)^T B (x* − p) − (x* − p)^T b
         = (1/2) p^T B p − p^T B x* + p^T b + f(x*).

Now, (3.3) implies B x* = b − A^T λ*. Observing Ap = 0, we have

    p^T B x* = p^T (b − A^T λ*) = p^T b − (Ap)^T λ* = p^T b,

whence

    f(x) = (1/2) p^T B p + f(x*).

In view of p ∈ Ker A, we can write p = Z u with u ∈ ℝ^{n−m}, and hence

    f(x) = (1/2) u^T Z^T B Z u + f(x*).

Since Z^T B Z is positive definite, we deduce f(x) > f(x*) for all feasible x ≠ x*. Consequently, x* is the unique global minimizer of the QP (3.2a),(3.2b). □

3.3 Direct solution of the KKT system

As far as the direct solution of the KKT system (3.3) is concerned, we distinguish between the symmetric indefinite factorization, the range-space approach, and the null-space approach.

3.3.1 Symmetric indefinite factorization

A possible way to solve the KKT system (3.3) is to provide a symmetric factorization of the KKT matrix according to

    P^T K P = L D L^T,                                               (3.4)

where P is an appropriately chosen permutation matrix, L is lower triangular with diag(L) = I, and D is block diagonal. Based on (3.4), the KKT system (3.3) is solved as follows:

    solve  L y = P^T (b, c)^T,                                       (3.5a)
    solve  D ŷ = y,                                                  (3.5b)
    solve  L^T ỹ = ŷ,                                                (3.5c)
    set    (x*, λ*)^T = P ỹ.                                         (3.5d)
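
To make this concrete, here is a minimal Python sketch (not part of the original notes) that assembles the KKT matrix (3.3) and solves it via a symmetric indefinite factorization: SciPy's `assume_a='sym'` option dispatches to LAPACK's Bunch-Kaufman LDL^T routine, i.e., a factorization of the form (3.4). The function name and the small test problem are illustrative choices.

```python
import numpy as np
from scipy.linalg import solve

def solve_eq_qp(B, A, b, c):
    """Solve min (1/2) x^T B x - x^T b  s.t.  A x = c  via the KKT system (3.3)."""
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T],
                  [A, np.zeros((m, m))]])   # KKT matrix of (3.3)
    rhs = np.concatenate([b, c])
    # assume_a='sym' selects a symmetric indefinite (Bunch-Kaufman) solve,
    # i.e. a factorization P^T K P = L D L^T as in (3.4)
    sol = solve(K, rhs, assume_a='sym')
    return sol[:n], sol[n:]                 # x*, lambda*

# Tiny example: B = I, so f(x) = (1/2)||x||^2 - b^T x, constraint x1 + x2 = 1
B = np.eye(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([1.0])
x, lam = solve_eq_qp(B, A, b, c)
print(x, lam)   # expected: x* = [0.5, 0.5], lambda* = [0.5]
```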
3.3.2 Range-space approach

The range-space approach applies if B ∈ ℝ^{n×n} is symmetric positive definite. Block Gauss elimination of the primal variable x* leads to the Schur complement system

    A B^{−1} A^T λ* = A B^{−1} b − c                                 (3.6)

with the Schur complement S ∈ ℝ^{m×m} given by S := A B^{−1} A^T. Once λ* has been computed from (3.6), the primal solution x* is recovered from the first block row of (3.3), i.e., B x* = b − A^T λ*. The range-space approach is particularly effective if

• B is well conditioned and easily invertible (e.g., B is diagonal or block diagonal),
• B^{−1} is known explicitly (e.g., by means of a quasi-Newton updating formula),
• the number m of equality constraints is small.

3.3.3 Null-space approach

The null-space approach does not require regularity of B and thus has a wider range of applicability than the range-space approach. We assume that A ∈ ℝ^{m×n} has full row rank m and that Z^T B Z is positive definite, where Z ∈ ℝ^{n×(n−m)} is the matrix whose columns span Ker A; such a Z can be computed by QR factorization (cf. Chapter 2.4). We partition the vector x* according to

    x* = Y w_Y + Z w_Z,                                              (3.7)

where Y ∈ ℝ^{n×m} is such that [Y Z] ∈ ℝ^{n×n} is nonsingular and w_Y ∈ ℝ^m, w_Z ∈ ℝ^{n−m}. Substituting (3.7) into the second equation of (3.3) and observing AZ = 0, we obtain

    A x* = A Y w_Y + A Z w_Z = A Y w_Y = c,                          (3.8)

i.e., Y w_Y is a particular solution of Ax = c. Since A ∈ ℝ^{m×n} has rank m and [Y Z] ∈ ℝ^{n×n} is nonsingular, the product matrix A [Y Z] = [A Y  0] has full row rank m, so that A Y ∈ ℝ^{m×m} is nonsingular. Hence, w_Y is well determined by (3.8). On the other hand, substituting (3.7) into the first equation of (3.3), we get

    B Y w_Y + B Z w_Z + A^T λ* = b.

Multiplying by Z^T and observing Z^T A^T = (A Z)^T = 0 yields

    Z^T B Z w_Z = Z^T b − Z^T B Y w_Y.                               (3.9)

The reduced KKT system (3.9) can be solved by a Cholesky factorization of the reduced Hessian Z^T B Z ∈ ℝ^{(n−m)×(n−m)}. Once w_Y and w_Z have been computed as the solutions of (3.8) and (3.9), x* is obtained according to (3.7). Finally, the Lagrange multiplier turns out to be the solution of the linear system arising from the multiplication of the first equation in (3.3) by Y^T:

    (A Y)^T λ* = Y^T b − Y^T B x*.                                   (3.10)
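
Both approaches translate almost line by line into code. First, a minimal sketch of the range-space solve (3.6), assuming B is symmetric positive definite so that its Cholesky factorization exists; the function name is an illustrative choice.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def range_space_qp(B, A, b, c):
    """Range-space approach; assumes B symmetric positive definite."""
    Bfac = cho_factor(B)                  # Cholesky factorization of B
    S = A @ cho_solve(Bfac, A.T)          # Schur complement S = A B^{-1} A^T
    lam = np.linalg.solve(S, A @ cho_solve(Bfac, b) - c)   # (3.6)
    x = cho_solve(Bfac, b - A.T @ lam)    # first block row of (3.3)
    return x, lam
```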
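
Correspondingly, a minimal sketch of the null-space approach (3.7)-(3.10). Here Y and Z are taken from a full QR factorization of A^T, one concrete realization of the construction referred to in Chapter 2.4; only full row rank of A and positive definiteness of Z^T B Z are assumed, not regularity of B.

```python
import numpy as np

def null_space_qp(B, A, b, c):
    """Null-space approach; assumes rank(A) = m and Z^T B Z positive definite."""
    n, m = A.shape[1], A.shape[0]
    Q, _ = np.linalg.qr(A.T, mode='complete')   # full QR of A^T
    Y, Z = Q[:, :m], Q[:, m:]                   # columns of Z span Ker A
    wY = np.linalg.solve(A @ Y, c)              # (3.8): A Y w_Y = c
    # (3.9): reduced KKT system (a Cholesky solve of Z^T B Z in practice)
    wZ = np.linalg.solve(Z.T @ B @ Z, Z.T @ (b - B @ (Y @ wY)))
    x = Y @ wY + Z @ wZ                         # (3.7)
    lam = np.linalg.solve((A @ Y).T, Y.T @ (b - B @ x))   # (3.10)
    return x, lam
```

On the small example above, both routines should return the same (x*, λ*) as the direct KKT solve.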
3.4 Iterative solution of the KKT system

If the direct solution of the KKT system (3.3) is computationally too costly, the alternative is to use an iterative method. An iterative solver can either be applied to the entire KKT system or, as in the range-space and null-space approaches, exploit the special structure of the KKT matrix.

3.4.1 Krylov methods

The KKT matrix K ∈ ℝ^{(n+m)×(n+m)} is indefinite. In fact, if A has full row rank m, K has n positive and m negative eigenvalues. Therefore, for the iterative solution of (3.3), Krylov subspace methods such as GMRES (Generalized Minimum RESidual) and QMR (Quasi Minimum Residual) are appropriate candidates (see the first sketch after Section 3.4.3 below).

3.4.2 Transforming range-space iterations

We assume B ∈ ℝ^{n×n} to be symmetric positive definite and suppose that B̃ is some symmetric positive definite and easily invertible approximation of B such that B̃^{−1} B ≈ I. We choose K_L ∈ ℝ^{(n+m)×(n+m)} as the lower triangular block matrix

    K_L = [ I           0 ]
          [ −A B̃^{−1}   I ],                                        (3.11)

which gives rise to the regular splitting

    K_L K = [ B̃   A^T ]  −  [ B̃ (I − B̃^{−1} B)    0 ]
            [ 0    S̃  ]     [ −A (I − B̃^{−1} B)   0 ]               (3.12)
            \__________/    \________________________/
               =: M1                =: M2 ≈ 0

where S̃ ∈ ℝ^{m×m} is given by

    S̃ := −A B̃^{−1} A^T.                                            (3.13)

We set ψ := (x, λ)^T and α := (b, c)^T. Given an initial iterate ψ^(0) ∈ ℝ^{n+m}, we compute ψ^(k), k ∈ ℕ, by means of the transforming range-space iteration

    ψ^(k+1) = ψ^(k) + M1^{−1} K_L (α − K ψ^(k))
            = (I − M1^{−1} K_L K) ψ^(k) + M1^{−1} K_L α,   k ≥ 0.    (3.14)

The transforming range-space iteration (3.14) is implemented as follows (a code sketch is given after Section 3.4.3 below):

    d^(k) = (d1^(k), d2^(k))^T := α − K ψ^(k),                       (3.15a)
    K_L d^(k) = (d1^(k), −A B̃^{−1} d1^(k) + d2^(k))^T,              (3.15b)
    solve  M1 φ^(k) = K_L d^(k),                                     (3.15c)
    ψ^(k+1) = ψ^(k) + φ^(k).                                         (3.15d)

3.4.3 Transforming null-space iterations

We assume that x ∈ ℝ^n and λ ∈ ℝ^m admit the decompositions

    x = (x1, x2)^T,   x1 ∈ ℝ^{m1},  x2 ∈ ℝ^{n−m1},                   (3.16a)
    λ = (λ1, λ2)^T,   λ1 ∈ ℝ^{m1},  λ2 ∈ ℝ^{m−m1},                   (3.16b)

and that A ∈ ℝ^{m×n} and B ∈ ℝ^{n×n} can be partitioned by means of

    A = [ A11  A12 ],   B = [ B11  B12 ],                            (3.17)
        [ A21  A22 ]        [ B21  B22 ]

where A11, B11 ∈ ℝ^{m1×m1} with nonsingular A11. Partitioning the right-hand side in (3.3) accordingly, the KKT system takes the form

    [ B11  B12 | A11^T  A21^T ] [ x1* ]   [ b1 ]
    [ B21  B22 | A12^T  A22^T ] [ x2* ]   [ b2 ]
    [ ------------------------ ] [ ---- ] = [ -- ]                   (3.18)
    [ A11  A12 |   0      0   ] [ λ1* ]   [ c1 ]
    [ A21  A22 |   0      0   ] [ λ2* ]   [ c2 ]

We rearrange (3.18) by exchanging the second and third rows and columns:

    [ B11   A11^T | B12   A21^T ] [ x1* ]   [ b1 ]
    [ A11   0     | A12   0     ] [ λ1* ]   [ c1 ]
    [ -------------------------- ] [ ---- ] = [ -- ]                 (3.19)
    [ B21   A12^T | B22   A22^T ] [ x2* ]   [ b2 ]
    [ A21   0     | A22   0     ] [ λ2* ]   [ c2 ]

Observing B21 = B12^T, in block form (3.19) can be written as

    [ A   B^T ] [ ψ1* ]   [ α1 ]
    [ B   D   ] [ ψ2* ] = [ α2 ],                                    (3.20)
    \_________/
       =: K

where ψi* := (xi*, λi*)^T and αi := (bi, ci)^T, 1 ≤ i ≤ 2, and, with a slight abuse of notation, the blocks are

    A = [ B11  A11^T ],   B = [ B21  A12^T ],   D = [ B22  A22^T ].
        [ A11  0     ]        [ A21  0     ]        [ A22  0     ]
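
As an illustration of Section 3.4.1, the following sketch applies a Krylov method to the full KKT system. The notes name GMRES and QMR as candidates; since SciPy ships MINRES but not QMR, the sketch substitutes MINRES, a Krylov solver specialized to symmetric indefinite matrices such as K.

```python
import numpy as np
from scipy.sparse.linalg import minres

# Same tiny test problem as before; K is symmetric indefinite with
# n positive and m negative eigenvalues when A has full row rank.
B = np.eye(2)
A = np.array([[1.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([1.0])
n, m = 2, 1
K = np.block([[B, A.T], [A, np.zeros((m, m))]])
psi, info = minres(K, np.concatenate([b, c]))
assert info == 0                     # info == 0 signals convergence
print(psi[:n], psi[n:])              # x*, lambda*, as in the direct solve
```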
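
Finally, the promised sketch of the transforming range-space iteration (3.15) from Section 3.4.2. The choice B̃ := diag(B) as the easily invertible approximation and the residual-based stopping test are illustrative assumptions, not prescribed by the notes.

```python
import numpy as np

def transforming_range_space(B, A, b, c, tol=1e-10, maxit=500):
    """Iteration (3.14)/(3.15) with B~ := diag(B) (illustrative choice)."""
    n, m = B.shape[0], A.shape[0]
    Bt_inv = 1.0 / np.diag(B)               # B~^{-1} for diagonal B~
    S = -A @ (Bt_inv[:, None] * A.T)        # S~ = -A B~^{-1} A^T, cf. (3.13)
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    alpha = np.concatenate([b, c])
    psi = np.zeros(n + m)                   # psi^(0) = 0
    for _ in range(maxit):
        d = alpha - K @ psi                 # (3.15a): residual
        if np.linalg.norm(d) <= tol * np.linalg.norm(alpha):
            break
        d1, d2 = d[:n], d[n:]
        g1, g2 = d1, d2 - A @ (Bt_inv * d1)          # (3.15b): K_L d
        phi2 = np.linalg.solve(S, g2)                # (3.15c): S~ block of M1
        phi1 = Bt_inv * (g1 - A.T @ phi2)            #          B~ block of M1
        psi = psi + np.concatenate([phi1, phi2])     # (3.15d)
    return psi[:n], psi[n:]
```

A quick sanity check on the splitting: if B is itself diagonal, then B̃ = B, M2 = 0, and the iteration converges in a single step.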