1 Gradient-Based Optimization

1.1 General Algorithm for Smooth Functions

All algorithms for unconstrained gradient-based optimization can be described as follows. We start with iteration number k = 0 and a starting point, x_k.

1. Test for convergence. If the conditions for convergence are satisfied, then we can stop and x_k is the solution.

2. Compute a search direction. Compute the vector p_k that defines the direction in n-space along which we will search.

3. Compute the step length. Find a positive scalar α_k such that f(x_k + α_k p_k) < f(x_k).

4. Update the design variables. Set

   x_{k+1} = x_k + α_k p_k,    (1)

   where the step is Δx_k = α_k p_k; set k = k + 1 and go back to step 1.

There are two subproblems in this type of algorithm for each major iteration: computing the search direction p_k and finding the step size (controlled by α_k). The difference between the various types of gradient-based algorithms is the method used for computing the search direction.

AA222: Introduction to MDO

1.2 Optimality Conditions

Consider a function f(x), where x is the n-vector x = [x_1, x_2, …, x_n]^T. The gradient vector of this function is given by the partial derivatives with respect to each of the independent variables,

   ∇f(x) ≡ g(x) ≡ [∂f/∂x_1, ∂f/∂x_2, …, ∂f/∂x_n]^T.    (2)

In the multivariate case, the gradient vector is perpendicular to the hyperplane tangent to the contour surfaces of constant f.

Higher derivatives of multivariable functions are defined as in the single-variable case, but note that the number of gradient components increases by a factor of n for each differentiation.
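The four-step loop above can be sketched directly in code. The sketch below is a minimal illustration, not the course's implementation: it assumes a forward-difference gradient, a fixed step length in place of a proper line search, and arbitrary tolerance values.

```python
def gradient(f, x, h=1e-6):
    """Forward-difference approximation of the gradient of f at x."""
    fx = f(x)
    g = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        g.append((f(xp) - fx) / h)
    return g

def minimize(f, x0, alpha=0.1, tol=1e-5, max_iter=10000):
    """Generic gradient-based loop: test convergence, choose a direction,
    take a step, update, repeat."""
    x = list(x0)
    for _ in range(max_iter):
        g = gradient(f, x)                          # needed for steps 1 and 2
        if max(abs(gi) for gi in g) <= tol:         # 1. convergence test
            break
        p = [-gi for gi in g]                       # 2. search direction
        # 3. a line search would pick alpha_k here; we keep alpha fixed
        x = [xi + alpha * pi for xi, pi in zip(x, p)]   # 4. update
    return x

x_star = minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```

With a proper line search and a smarter search direction, steps 2 and 3 become the methods discussed in the following sections.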
While the gradient of a function of n variables is an n-vector, the "second derivative" of an n-variable function is defined by n² partial derivatives (the derivatives of the n first partial derivatives with respect to the n variables):

   ∂²f/(∂x_i ∂x_j) for i ≠ j, and ∂²f/∂x_i² for i = j.    (3)

If the partial derivatives ∂f/∂x_i, ∂f/∂x_j and ∂²f/(∂x_i ∂x_j) are continuous and f is single valued, then ∂²f/(∂x_j ∂x_i) exists and ∂²f/(∂x_i ∂x_j) = ∂²f/(∂x_j ∂x_i). Therefore the second-order partial derivatives can be represented by a square symmetric matrix called the Hessian matrix,

   ∇²f(x) ≡ H(x) ≡ [ ∂²f/∂x_1²        ⋯  ∂²f/(∂x_1 ∂x_n)
                     ⋮                 ⋱  ⋮
                     ∂²f/(∂x_n ∂x_1)   ⋯  ∂²f/∂x_n²       ],    (4)

which contains n(n + 1)/2 independent elements. If f is quadratic, the Hessian of f is constant, and the function can be expressed as

   f(x) = ½ x^T H x + g^T x + α.    (5)

As in the single-variable case, the optimality conditions can be derived from the Taylor-series expansion of f about x*:

   f(x* + εp) = f(x*) + ε p^T g(x*) + ½ ε² p^T H(x* + εθp) p,    (6)

where 0 ≤ θ ≤ 1, ε is a scalar, and p is an n-vector. For x* to be a local minimum, for any vector p there must be a finite ε such that f(x* + εp) ≥ f(x*), i.e., there is a neighborhood in which this condition holds. If this condition is satisfied, then f(x* + εp) − f(x*) ≥ 0 and the first- and second-order terms in the Taylor-series expansion must be greater than or equal to zero.

As in the single-variable case, and for the same reason, we start by considering the first-order term. Since p is an arbitrary vector and ε can be positive or negative, every component of the gradient vector g(x*) must be zero. Now we have to consider the second-order term, ½ ε² p^T H(x* + εθp) p. For this term to be non-negative, H(x* + εθp) has to be positive semi-definite, and by continuity, the Hessian at the optimum, H(x*), must also be positive semi-definite.
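Equations (3)–(5) can be checked numerically: a central-difference Hessian of a quadratic should come out symmetric and identical at every point. The function, step size h, and test points below are illustrative choices, not anything prescribed by the notes.

```python
def hessian(f, x, h=1e-4):
    """Central-difference approximation of the n-by-n Hessian of f at x."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

# quadratic f(x) = 1/2 x^T H x + g^T x with H = [[2, 1], [1, 3]], g = [1, -1]
f = lambda x: 0.5 * (2 * x[0] ** 2 + 2 * x[0] * x[1] + 3 * x[1] ** 2) + x[0] - x[1]

H0 = hessian(f, [0.0, 0.0])   # Hessian at the origin
H1 = hessian(f, [5.0, -3.0])  # Hessian elsewhere: the same, since f is quadratic
```

Both approximations recover the constant matrix [[2, 1], [1, 3]] to within finite-difference error, and each is symmetric, as Eq. (4) requires.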
Necessary conditions (for a local minimum):

   ‖g(x*)‖ = 0 and H(x*) is positive semi-definite.    (7)

Sufficient conditions (for a strong local minimum):

   ‖g(x*)‖ = 0 and H(x*) is positive definite.    (8)

Some definitions from linear algebra that might be helpful:

• The matrix H ∈ ℝ^(n×n) is positive definite if p^T H p > 0 for all nonzero vectors p ∈ ℝⁿ. (If H = H^T, then all the eigenvalues of H are strictly positive.)

• The matrix H ∈ ℝ^(n×n) is positive semi-definite if p^T H p ≥ 0 for all vectors p ∈ ℝⁿ. (If H = H^T, then the eigenvalues of H are positive or zero.)

• The matrix H ∈ ℝ^(n×n) is indefinite if there exist p, q ∈ ℝⁿ such that p^T H p > 0 and q^T H q < 0. (If H = H^T, then H has eigenvalues of mixed sign.)

• The matrix H ∈ ℝ^(n×n) is negative definite if p^T H p < 0 for all nonzero vectors p ∈ ℝⁿ. (If H = H^T, then all the eigenvalues of H are strictly negative.)

2-D Critical Point Examples

Example 1.1: Critical Points of a Function

Consider the function

   f(x) = 1.5 x_1² + x_2² − 2 x_1 x_2 + 2 x_1³ + 0.5 x_1⁴.

Find all stationary points of f and classify them. Solving ∇f(x) = 0 yields three solutions:

   (0, 0)                    local minimum
   ½(−3 − √7, −3 − √7)       global minimum
   ½(−3 + √7, −3 + √7)       saddle point

To establish the type of each point, we have to determine whether the Hessian there is positive definite and compare the values of the function at the points.

Figure 1: Critical points of f(x) = 1.5 x_1² + x_2² − 2 x_1 x_2 + 2 x_1³ + 0.5 x_1⁴

1.3 Steepest Descent Method

The steepest descent method uses the negative of the gradient vector at each point as the search direction for each iteration. As mentioned previously, the gradient vector is orthogonal to the plane tangent to the isosurfaces of the function. The gradient vector at a point, g(x_k), is also the direction of maximum rate of change (maximum increase) of the function at that point. This rate of change is given by the norm ‖g(x_k)‖.
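The closing claim, that the gradient is the direction of maximum rate of increase and that its norm gives that rate, can be illustrated numerically: over unit directions d, the directional derivative g^T d never exceeds ‖g‖ and is largest when d points along g. The function and sampling scheme below are arbitrary choices for the sketch.

```python
import math

def grad(x):
    # hand-computed gradient of the illustrative f(x) = x1^2 + 3 x2^2
    return [2 * x[0], 6 * x[1]]

x = [1.0, -2.0]
g = grad(x)
gnorm = math.hypot(g[0], g[1])

# directional derivative g^T d for unit directions d around the circle
rates = []
for i in range(360):
    t = 2 * math.pi * i / 360
    d = [math.cos(t), math.sin(t)]
    rates.append(g[0] * d[0] + g[1] * d[1])

best = max(rates)   # maximum sampled rate of increase, close to ||g||
```

Every sampled rate is bounded by ‖g‖, and the maximum is attained (up to the 1° sampling resolution) by the direction of g itself.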
Steepest descent algorithm:

1. Select a starting point x_0 and convergence parameters ε_g, ε_a and ε_r.

2. Compute g(x_k) ≡ ∇f(x_k). If ‖g(x_k)‖ ≤ ε_g, then stop. Otherwise, compute the normalized search direction p_k = −g(x_k)/‖g(x_k)‖.

3. Perform a line search to find the step length α_k in the direction of p_k.

4. Update the current point, x_{k+1} = x_k + α_k p_k.

5. Evaluate f(x_{k+1}). If the condition |f(x_{k+1}) − f(x_k)| ≤ ε_a + ε_r |f(x_k)| is satisfied for two successive iterations, then stop. Otherwise, set k = k + 1 and return to step 2.

Here, |f(x_{k+1}) − f(x_k)| ≤ ε_a + ε_r |f(x_k)| is a check for the successive reductions of f. ε_a is the absolute tolerance on the change in function value (usually small, ≈ 10⁻⁶) and ε_r is the relative tolerance (usually set to 0.01).

If we use an exact line search, the steepest descent direction at each iteration is orthogonal to the previous one, i.e.,

   df(x_{k+1})/dα = 0  ⇒  [∂f(x_{k+1})/∂x_{k+1}]^T (∂x_{k+1}/∂α) = 0  ⇒  ∇f(x_{k+1})^T p_k = 0  ⇒    (9)

   −g(x_{k+1})^T g(x_k) = 0.    (10)

Therefore the method "zigzags" in the design space and is rather inefficient. Although a substantial decrease may be observed in the first few iterations, the method is usually very slow after that. In particular, while the algorithm is guaranteed to converge, it may take an infinite number of iterations. The rate of convergence is linear.

For steepest descent and other gradient methods that do not produce well-scaled search directions, we need to use other information to guess a step length. One strategy is to assume that the first-order change in x_k will be the same as the one obtained in the previous step.
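For a convex quadratic f(x) = ½ x^T A x − b^T x, the exact line-search step along the negative gradient has the closed form α = (g^T g)/(g^T A g), which makes the zigzag behaviour easy to demonstrate: successive gradients come out mutually orthogonal, exactly as Eq. (10) states. The matrix and right-hand side below are made-up data for the sketch.

```python
# Steepest descent with exact line search on f(x) = 1/2 x^T A x - b^T x.

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x = [0.0, 0.0]
g_prev = None
orth = []                                   # records g_{k+1}^T g_k, Eq. (10)
for _ in range(50):
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient A x - b
    if dot(g, g) < 1e-20:
        break
    if g_prev is not None:
        orth.append(dot(g, g_prev))
    alpha = dot(g, g) / dot(g, matvec(A, g))           # exact line search
    x = [xi - alpha * gi for xi, gi in zip(x, g)]
    g_prev = g
```

At termination x solves Ax = b (here x ≈ (1/11, 7/11)), and every recorded inner product of successive gradients is zero to rounding error.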
That is, we assume ᾱ_k g_k^T p_k = α_{k−1} g_{k−1}^T p_{k−1}, and therefore

   ᾱ_k = α_{k−1} (g_{k−1}^T p_{k−1}) / (g_k^T p_k).    (11)

Example 1.2: Steepest Descent Applied to f(x_1, x_2) = 1 − e^(−(10 x_1² + x_2²))

Figure 2: Solution path of the steepest descent method

1.4 Conjugate Gradient Method

A small modification to the steepest descent method takes into account the history of the gradients to move more directly towards the optimum. Suppose we want to minimize a convex quadratic function

   φ(x) = ½ x^T A x − b^T x,    (12)

where A is an n × n matrix that is symmetric and positive definite. Differentiating this with respect to x, we obtain

   ∇φ(x) = Ax − b ≡ r(x).    (13)

Minimizing the quadratic is thus equivalent to solving the linear system

   Ax = b.    (14)

The conjugate gradient method is an iterative method for solving linear systems of equations such as this one. A set of nonzero vectors {p_0, p_1, …, p_{n−1}} is conjugate with respect to A if

   p_i^T A p_j = 0 for all i ≠ j.    (15)

Suppose that we start from a point x_0 and use a set of directions {p_0, p_1, …, p_{n−1}} to generate a sequence {x_k}, where

   x_{k+1} = x_k + α_k p_k,    (16)

and α_k is the minimizer of φ along x_k + α p_k, given by

   α_k = − (r_k^T p_k) / (p_k^T A p_k).    (17)

We will see that for any x_0 the sequence {x_k} generated by the conjugate direction algorithm converges to the solution of the linear system in at most n steps.
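Putting Eqs. (12)–(17) together gives a compact conjugate gradient solver for Ax = b. The sketch below uses the standard update β_k = (r_{k+1}^T A p_k)/(p_k^T A p_k) to make successive directions A-conjugate, a detail the notes have not derived yet, so treat it as a preview rather than the notes' own construction. Problem data are illustrative.

```python
def cg(A, b, tol=1e-12):
    """Conjugate gradient for A x = b, with A symmetric positive definite.
    Converges in at most n steps in exact arithmetic."""
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))

    x = [0.0] * n
    r = [ri - bi for ri, bi in zip(mv(x), b)]   # residual r = A x - b, Eq. (13)
    p = [-ri for ri in r]                       # first direction: steepest descent
    for _ in range(n):
        if dot(r, r) < tol:
            break
        Ap = mv(p)
        alpha = -dot(r, p) / dot(p, Ap)         # Eq. (17)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]       # Eq. (16)
        r = [ri + alpha * api for ri, api in zip(r, Ap)]    # r_{k+1} = r_k + alpha A p_k
        beta = dot(r, Ap) / dot(p, Ap)          # enforces p_{k+1}^T A p_k = 0, Eq. (15)
        p = [-ri + beta * pi for ri, pi in zip(r, p)]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)   # the 2x2 system is solved in two conjugate steps
```

On this 2 × 2 system the method reaches the exact solution (1/11, 7/11) after n = 2 iterations, illustrating the finite-termination property claimed above.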