Preconditioning Techniques for Large Linear Systems: a Survey
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
-
A Learning-Based Iterative Method for Solving Vehicle Routing Problems
Published as a conference paper at ICLR 2020. A LEARNING-BASED ITERATIVE METHOD FOR SOLVING VEHICLE ROUTING PROBLEMS. Hao Lu∗ (Princeton University, Princeton, NJ 08540; [email protected]); Xingwen Zhang∗ and Shuang Yang (Ant Financial Services Group, San Mateo, CA 94402; {xingwen.zhang, shuang.yang}@example.com).

ABSTRACT. This paper is concerned with solving combinatorial optimization problems, in particular the capacitated vehicle routing problem (CVRP). Classical Operations Research (OR) algorithms such as LKH3 (Helsgaun, 2017) are inefficient and difficult to scale to larger-size problems. Machine-learning-based approaches have recently been shown to be promising, partly because of their efficiency (once trained, they can solve problems within minutes or even seconds). However, there is still a considerable gap between the quality of a machine-learned solution and what OR methods can offer (e.g., on CVRP-100, the best result of learned solutions is between 16.10 and 16.80, significantly worse than LKH3's 15.65). In this paper, we present "Learn to Improve" (L2I), the first learning-based approach for CVRP that is efficient in solving speed and at the same time outperforms OR methods. Starting with a random initial solution, L2I learns to iteratively refine the solution with an improvement operator, selected by a reinforcement-learning-based controller. The improvement operator is selected from a pool of powerful operators that are customized for routing problems. By combining the strengths of the two worlds, our approach achieves new state-of-the-art results on CVRP, e.g., an average cost of 15.57 on CVRP-100.

1 INTRODUCTION. In this paper, we focus on an important class of combinatorial optimization, vehicle routing problems (VRP), which have a wide range of applications in logistics.
-
Structured Eigenvalue Condition Numbers and Linearizations for Matrix Polynomials
Eidgenössische Technische Hochschule Zürich / Ecole polytechnique fédérale de Zurich / Politecnico federale di Zurigo / Swiss Federal Institute of Technology Zurich.

Structured eigenvalue condition numbers and linearizations for matrix polynomials. Bibhas Adhikari∗, Rafikul Alam† and Daniel Kressner‡. Research Report No. 2009-01, January 2009. Seminar für Angewandte Mathematik, Eidgenössische Technische Hochschule, CH-8092 Zürich, Switzerland.
∗ Department of Mathematics, Indian Institute of Technology Guwahati, India. E-mail: [email protected]
† Department of Mathematics, Indian Institute of Technology Guwahati, India. E-mail: rafi[email protected], rafi[email protected], Fax: +91-361-2690762/2582649.

Abstract. This work is concerned with eigenvalue problems for structured matrix polynomials, including complex symmetric, Hermitian, even, odd, palindromic, and anti-palindromic matrix polynomials. Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Recently, linearizations have been classified for which the pencil reflects the structure of the original polynomial. A question of practical importance is whether this process of linearization increases the sensitivity of the eigenvalue with respect to structured perturbations. For all structures under consideration, we show that this is not the case: there is always a linearization for which the structured condition number of an eigenvalue does not differ significantly. This implies, for example, that a structure-preserving algorithm applied to the linearization fully benefits from a potentially low structured eigenvalue condition number of the original matrix polynomial.

Keywords. Eigenvalue problem, matrix polynomial, linearization, structured condition number.
-
Condition Number Bounds for Problems with Integer Coefficients*
Condition Number Bounds for Problems with Integer Coefficients. Gregorio Malajovich, Departamento de Matemática Aplicada, Universidade Federal do Rio de Janeiro, Caixa Postal 68530, CEP 21945-970, Rio de Janeiro, RJ, Brasil. E-mail: [email protected]

An explicit a priori bound for the condition number associated to each of the following problems is given: general linear equation solving, least squares, non-symmetric eigenvalue problems, solving univariate polynomials, and solving systems of multivariate polynomials. It is assumed that the input has integer coefficients and is not on the degeneracy locus of the respective problem (i.e., the condition number is finite). Our bounds are stated in terms of the dimension and of the bit-size of the input. In the same setting, bounds are given for the speed of convergence of the following iterative algorithms: QR iteration without shift for the symmetric eigenvalue problem, and Graeffe iteration for univariate polynomials.

Key Words: condition number, height, complexity

1. INTRODUCTION. In most of the numerical analysis literature, complexity and stability of numerical algorithms are usually estimated in terms of the problem instance dimension and of a "condition number". For instance, the complexity of solving an n × n linear system Ax = b is usually estimated in terms of the dimension n (when the input actually consists of n(n + 1) real numbers) and the condition number κ(A) (see Section 2.1 for the definition of the condition number). There is a set of problem instances with κ(A) = ∞, and in most cases it makes no sense to solve these instances.
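As a small illustration of the quantity κ(A) discussed above (not taken from the paper), the condition number of a hypothetical integer-coefficient system can be checked directly with NumPy:

```python
import numpy as np

# Hypothetical 3x3 system with integer coefficients (not from the paper).
A = np.array([[2, 1, 0],
              [1, 3, 1],
              [0, 1, 4]], dtype=float)
b = np.array([1.0, 2.0, 3.0])

kappa = np.linalg.cond(A)     # 2-norm condition number kappa(A)
x = np.linalg.solve(A, b)     # solution of Ax = b

# Rule of thumb: a relative input error can be amplified by up to kappa(A)
# in the solution, so roughly log10(kappa) digits of accuracy may be lost.
print(f"kappa(A) = {kappa:.3e}")
print(f"x        = {x}")
```
-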
Overview of Iterative Linear System Solver Packages
Overview of Iterative Linear System Solver Packages. Victor Eijkhout, July 1998.

Abstract. Description and comparison of several packages for the iterative solution of linear systems of equations.

1 Introduction. There are several freely available packages for the iterative solution of linear systems of equations, typically derived from partial differential equation problems. In this report I will give a brief description of a number of packages, and give an inventory of their features and defining characteristics. The most important features of the packages are which iterative methods and preconditioners they supply; the most relevant defining characteristics are the interface they present to the user's data structures, and their implementation language.

2 Discussion. Iterative methods are subject to several design decisions that affect ease of use of the software and the resulting performance. In this section I will give a global discussion of the issues involved, and how certain points are addressed in the packages under review.

2.1 Preconditioners. A good preconditioner is necessary for the convergence of iterative methods as the problem to be solved becomes more difficult. Good preconditioners are hard to design, and this especially holds true in the case of parallel processing. Here is a short inventory of the various kinds of preconditioners found in the packages reviewed.

2.1.1 About incomplete factorisation preconditioners. Incomplete factorisations are among the most successful preconditioners developed for single-processor computers. Unfortunately, since they are implicit in nature, they cannot immediately be used on parallel architectures.
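To make the incomplete-factorisation idea concrete, here is a small SciPy sketch (not one of the packages in the report, which predates SciPy): an ILU factorisation of an assumed 1-D Poisson test matrix is wrapped as a preconditioner for GMRES, and inner iterations are counted via the callback. Exact counts depend on the SciPy version and tolerances.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assumed test problem (not from the report): a 1-D Poisson matrix.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorisation of A, wrapped as a preconditioner M ~ A^{-1}.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

# Count GMRES iterations with and without the preconditioner.
def count_iters(M=None):
    residuals = []
    x, info = spla.gmres(A, b, M=M, restart=30,
                         callback=lambda r: residuals.append(r))
    return len(residuals), info   # info == 0 means converged

print("no preconditioner :", count_iters())
print("ILU preconditioner:", count_iters(M))
```
-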
The Generalized Simplex Method for Minimizing a Linear Form Under Linear Inequality Restraints
Pacific Journal of Mathematics. THE GENERALIZED SIMPLEX METHOD FOR MINIMIZING A LINEAR FORM UNDER LINEAR INEQUALITY RESTRAINTS. George Bernard Dantzig, Alexander Orden and Philip Wolfe. Vol. 5, No. 2, October 1955.

1. Background and summary. The determination of "optimum" solutions of systems of linear inequalities is assuming increasing importance as a tool for mathematical analysis of certain problems in economics, logistics, and the theory of games [1; 5]. The solution of large systems is becoming more feasible with the advent of high-speed digital computers; however, as in the related problem of inversion of large matrices, there are difficulties which remain to be resolved connected with rank. This paper develops a theory for avoiding assumptions regarding rank of underlying matrices which has import in applications where little or nothing is known about the rank of the linear inequality system under consideration. The simplex procedure is a finite iterative method which deals with problems involving linear inequalities in a manner closely analogous to the solution of linear equations or matrix inversion by Gaussian elimination. Like the latter it is useful in proving fundamental theorems on linear algebraic systems. For example, one form of the fundamental duality theorem associated with linear inequalities is easily shown as a direct consequence of solving the main problem. Other forms can be obtained by trivial manipulations (for a fuller discussion of these interrelations, see [13]); in particular, the duality theorem [8; 10; 11; 12] leads directly to the Minmax theorem for zero-sum two-person games [id] and to a computational method (pointed out informally by Herman Rubin and demonstrated by Robert Dorfman [1a]) which simultaneously yields optimal strategies for both players and also the value of the game.
-
A Geometric Theory for Preconditioned Inverse Iteration. III: a Short and Sharp Convergence Estimate for Generalized Eigenvalue Problems
A geometric theory for preconditioned inverse iteration. III: A short and sharp convergence estimate for generalized eigenvalue problems.

Andrew V. Knyazev, Department of Mathematics, University of Colorado at Denver, P.O. Box 173364, Campus Box 170, Denver, CO 80217-3364.
Klaus Neymeyr, Mathematisches Institut der Universität Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany.

Abstract. In two previous papers by Neymeyr: A geometric theory for preconditioned inverse iteration I: Extrema of the Rayleigh quotient, LAA 322 (1-3), 61-85, 2001, and A geometric theory for preconditioned inverse iteration II: Convergence estimates, LAA 322 (1-3), 87-104, 2001, a sharp, but cumbersome, convergence rate estimate was proved for a simple preconditioned eigensolver, which computes the smallest eigenvalue together with the corresponding eigenvector of a symmetric positive definite matrix, using a preconditioned gradient minimization of the Rayleigh quotient. In the present paper, we discover and prove a much shorter and more elegant, but still sharp in decisive quantities, convergence rate estimate of the same method that also holds for a generalized symmetric definite eigenvalue problem. The new estimate is simple enough to stimulate a search for a more straightforward proof technique that could be helpful to investigate such a practically important method as the locally optimal block preconditioned conjugate gradient eigensolver. We demonstrate the practical effectiveness of the latter for a model problem, where it compares favorably with two …
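The locally optimal block preconditioned conjugate gradient (LOBPCG) eigensolver mentioned at the end of the abstract is available in SciPy. The following is a minimal usage sketch under assumed data: a 2-D Laplacian test matrix and an ILU-based preconditioner, neither of which comes from the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg, spilu, LinearOperator

# Assumed SPD test matrix: 2-D Laplacian on a 30x30 grid (not from the paper).
n = 30
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(T, T).tocsc()            # symmetric positive definite
N = A.shape[0]

# Preconditioner applying an approximation of A^{-1}, here from incomplete LU.
ilu = spilu(A, drop_tol=1e-3)
M = LinearOperator((N, N), matvec=ilu.solve)

# Random initial block; ask for the 3 smallest eigenpairs.
rng = np.random.default_rng(0)
X = rng.standard_normal((N, 3))
eigvals, eigvecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
print(eigvals)
```
-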
16 Preconditioning
16 Preconditioning. The general idea underlying any preconditioning procedure for iterative solvers is to modify the (ill-conditioned) system Ax = b in such a way that we obtain an equivalent system Âx̂ = b̂ for which the iterative method converges faster. A standard approach is to use a nonsingular matrix M and rewrite the system as M⁻¹Ax = M⁻¹b. The preconditioner M needs to be chosen such that the matrix Â = M⁻¹A is better conditioned for the conjugate gradient method, or has better clustered eigenvalues for the GMRES method.

16.1 Preconditioned Conjugate Gradients. We mentioned earlier that the number of iterations required for the conjugate gradient algorithm to converge is proportional to √κ(A). Thus, for poorly conditioned matrices, convergence will be very slow, and clearly we will want to choose M such that κ(Â) < κ(A). This should result in faster convergence. How do we find Â, x̂, and b̂? In order to ensure symmetry and positive definiteness of Â we let

M⁻¹ = L Lᵀ   (44)

with a nonsingular m × m matrix L. Then we can rewrite

Ax = b ⟺ M⁻¹Ax = M⁻¹b ⟺ LᵀAx = Lᵀb ⟺ (LᵀAL)(L⁻¹x) = Lᵀb,

so that Â = LᵀAL, x̂ = L⁻¹x, and b̂ = Lᵀb. The symmetric positive definite matrix M is called the splitting matrix or preconditioner, and it can easily be verified that Â is symmetric positive definite as well. One could now formally write down the standard CG algorithm with the new "hatted" quantities. However, the algorithm is more efficient if the preconditioning is incorporated directly into the iteration. To see what this means we need to examine every single line in the CG algorithm.
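Incorporating the preconditioner directly into the iteration leads to the familiar preconditioned CG (PCG) loop. The sketch below is a minimal NumPy version of that loop, not copied from these notes; the Jacobi (diagonal) preconditioner and the SPD test matrix are assumptions made purely for illustration.

```python
import numpy as np

def pcg(A, b, M_solve, x0=None, tol=1e-8, maxiter=500):
    """Preconditioned conjugate gradients for SPD A.

    M_solve(r) applies the preconditioner, i.e. returns an approximation
    to A^{-1} r. Minimal sketch, not the notation of the notes above.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k + 1
        z = M_solve(r)
        rz_new = r @ z
        beta = rz_new / rz
        p = z + beta * p
        rz = rz_new
    return x, maxiter

# Assumed SPD test matrix with a Jacobi (diagonal) preconditioner.
n = 100
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * np.ones((n, n))
b = np.ones(n)
d = np.diag(A)
x, iters = pcg(A, b, M_solve=lambda r: r / d)
print(iters, np.linalg.norm(A @ x - b))
```
-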
An Efficient Three-Term Iterative Method for Estimating Linear Approximation Models in Regression Analysis
Mathematics (MDPI), Article. An Efficient Three-Term Iterative Method for Estimating Linear Approximation Models in Regression Analysis. Siti Farhana Husin 1,*, Mustafa Mamat 1, Mohd Asrul Hery Ibrahim 2 and Mohd Rivaie 3.
1 Faculty of Informatics and Computing, Universiti Sultan Zainal Abidin, Terengganu 21300, Malaysia; [email protected]
2 Faculty of Entrepreneurship and Business, Universiti Malaysia Kelantan, Kelantan 16100, Malaysia; [email protected]
3 Department of Computer Sciences and Mathematics, Universiti Teknologi Mara, Terengganu 54000, Malaysia; [email protected]
* Correspondence: [email protected]
Received: 15 April 2020; Accepted: 20 May 2020; Published: 15 June 2020.

Abstract: This study employs exact line search iterative algorithms for solving large scale unconstrained optimization problems in which the direction is a three-term modification of an iterative method with two different scaled parameters. The objective of this research is to identify the effectiveness of the new directions both theoretically and numerically. Sufficient descent property and global convergence analysis of the suggested methods are established. For numerical experiment purposes, the methods are compared with the previous well-known three-term iterative method and each method is evaluated over the same set of test problems with different initial points. Numerical results show that the performances of the proposed three-term methods are more efficient and superior to the existing method. These methods could also produce an approximate linear regression equation to solve the regression model. The findings of this study can help a better understanding of the applicability of numerical algorithms that can be used in estimating the regression model.

Keywords: steepest descent method; large-scale unconstrained optimization; regression model
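The abstract does not spell out the authors' three-term direction, so no attempt is made to reproduce it here. The sketch below only illustrates the ingredients it builds on: an exact line search applied to a least-squares regression objective, using the plain negative-gradient direction and hypothetical data.

```python
import numpy as np

def steepest_descent_exact(A, b, x0, maxiter=1000, tol=1e-10):
    """Steepest descent with exact line search for min 0.5 * ||Ax - b||^2.

    Illustrative only: the paper's method uses a three-term search
    direction; here the direction is just the negative gradient.
    """
    x = x0.copy()
    for k in range(maxiter):
        g = A.T @ (A @ x - b)          # gradient of the least-squares objective
        if np.linalg.norm(g) < tol:
            break
        Ag = A @ g
        alpha = (g @ g) / (Ag @ Ag)    # exact minimizer along -g
        x -= alpha * g
    return x, k

# Hypothetical regression data: fit y ~ beta0 + beta1 * t.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * t + 0.05 * rng.standard_normal(50)
A = np.column_stack([np.ones_like(t), t])
beta, iters = steepest_descent_exact(A, y, x0=np.zeros(2))
print(beta, iters)
```
-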
Revised Simplex Algorithm - 1
Lecture 9: Linear Programming: Revised Simplex and Interior Point Methods (a nice application of what we have learned so far). Prof. Krishna R. Pattipati, Dept. of Electrical and Computer Engineering, University of Connecticut. Contact: [email protected], (860) 486-2890. ECE 6435, Fall 2008, Adv Numerical Methods in Sci Comp, October 22, 2008. Copyright © 2004 by K. Pattipati.

Lecture Outline:
• What is Linear Programming (LP)?
• Why do we need to solve Linear-Programming problems?
  - L1 and L∞ curve fitting (i.e., parameter estimation using 1- and ∞-norms); see the sketch after this entry
  - Sample LP applications: transportation problems, shortest path problems, optimal control, diet problem
• Methods for solving LP problems:
  - Revised Simplex method
  - Ellipsoid method … not practical
  - Karmarkar's projective scaling (interior point method)
• Implementation issues of the least-squares subproblem of Karmarkar's method … more in the Linear Programming and Network Flows course
• Comparison of Simplex and projective methods
• References
1. Dimitris Bertsimas and John N. Tsitsiklis, Introduction to Linear Optimization, Athena Scientific, Belmont, MA, 1997.
2. I. Adler, M. G. C. Resende, G. Vega, and N. Karmarkar, "An Implementation of Karmarkar's Algorithm for Linear Programming," Mathematical Programming, Vol. 44, 1989, pp. 297-335.
3. I. Adler, N. Karmarkar, M. G. C. Resende, and G. Vega, "Data Structures and Programming Techniques for the Implementation of Karmarkar's Algorithm," ORSA Journal on Computing, Vol. 1, No. 2, 1989.

What is Linear Programming? One of the most celebrated problems since 1951. Major breakthroughs:
• Dantzig: Simplex method (1947-1949)
• Khachian: Ellipsoid method (1979) - polynomial complexity, but not competitive with the Simplex → not practical.
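As a companion to the "L1 and L∞ curve fitting" item in the outline, here is a small sketch (not code from the lecture) that poses an L∞ (Chebyshev) line fit as a linear program and solves it with SciPy's linprog; the data are made up for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: fit y ~ c0 + c1*t in the L-infinity (Chebyshev) sense.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
y = 1.0 + 2.0 * t + 0.1 * (rng.random(30) - 0.5)
A = np.column_stack([np.ones_like(t), t])      # design matrix, m x n
m, n = A.shape

# Variables z = [x, e]: minimize e subject to |A x - y| <= e (elementwise).
c = np.zeros(n + 1)
c[-1] = 1.0
A_ub = np.block([[ A, -np.ones((m, 1))],
                 [-A, -np.ones((m, 1))]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * n + [(0, None)]      # x free, e >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_fit, err = res.x[:n], res.x[-1]
print(x_fit, err)     # fitted coefficients and the minimax residual
```
-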
The Eigenvalue Problem: Perturbation Theory
Jim Lambers, MAT 610, Summer Session 2009-10, Lecture 13 Notes. These notes correspond to Sections 7.2 and 8.1 in the text.

The Eigenvalue Problem: Perturbation Theory

The Unsymmetric Eigenvalue Problem. Just as the problem of solving a system of linear equations Ax = b can be sensitive to perturbations in the data, the problem of computing the eigenvalues of a matrix can also be sensitive to perturbations in the matrix. We will now obtain some results concerning the extent of this sensitivity.

Suppose that A is obtained by perturbing a diagonal matrix D by a matrix F whose diagonal entries are zero; that is, A = D + F. If λ is an eigenvalue of A with corresponding eigenvector x, then we have (D − λI)x + Fx = 0. If λ is not equal to any of the diagonal entries of A, then D − λI is nonsingular and we have x = −(D − λI)⁻¹Fx. Taking ∞-norms of both sides, we obtain

‖x‖∞ = ‖(D − λI)⁻¹Fx‖∞ ≤ ‖(D − λI)⁻¹F‖∞ ‖x‖∞,

which yields

‖(D − λI)⁻¹F‖∞ = max_{1≤i≤n} Σ_{j≠i} |f_ij| / |d_ii − λ| ≥ 1.

It follows that for some i, 1 ≤ i ≤ n, λ satisfies

|d_ii − λ| ≤ Σ_{j≠i} |f_ij|.

That is, λ lies within one of the Gerschgorin circles in the complex plane, which has center a_ii and radius r_i = Σ_{j≠i} |a_ij|. This result is known as the Gerschgorin Circle Theorem.

Example. The eigenvalues of the matrix

A = [ −5  −1   1
      −2   2  −1
       1  −3   7 ]

are λ(A) = {6.4971, 2.7930, −5.2902}. The Gerschgorin disks are

D1 = {z ∈ C : |z − 7| ≤ 4},  D2 = {z ∈ C : |z − 2| ≤ 3},  D3 = {z ∈ C : |z + 5| ≤ 2}.

We see that each disk contains one eigenvalue.
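A quick numerical check of the example above (the matrix is the one from the notes; the code itself is an added illustration, not part of the notes):

```python
import numpy as np

# Gerschgorin disks for the example matrix from the notes.
A = np.array([[-5.0, -1.0,  1.0],
              [-2.0,  2.0, -1.0],
              [ 1.0, -3.0,  7.0]])

centers = np.diag(A)
radii = np.sum(np.abs(A), axis=1) - np.abs(centers)   # off-diagonal row sums

eigvals = np.linalg.eigvals(A)
for c, r in zip(centers, radii):
    print(f"disk: center {c:+.1f}, radius {r:.1f}")
print("eigenvalues:", np.round(eigvals, 4))

# Every eigenvalue lies in at least one disk.
in_some_disk = [any(abs(lam - c) <= r for c, r in zip(centers, radii))
                for lam in eigvals]
print(all(in_some_disk))   # True
```
-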
Numerical Linear Algebra: Solving Ax = b, an Overview
Numerical Linear Algebra: Improving iterative solvers - preconditioning, deflation, numerical software and parallelisation. Gerard Sleijpen and Martin van Gijzen, November 29, 2017. National Master Course, Delft University of Technology.

Solving Ax = b, an overview. [Flowchart: a decision tree for choosing a Krylov solver based on whether A = A∗, whether A > 0, is ill conditioned, or A + A∗ is strongly indefinite, whether a good (and possibly flexible) preconditioner is available, and whether there are large imaginary eigenvalues; the recommended methods are CG, SYMMLQ, MINRES, GCR, GMRES, Bi-CGSTAB, BiCGstab(ℓ), IDR and IDRstab.]

Introduction. We already saw that the performance of iterative methods can be improved by applying a preconditioner. Preconditioners (and deflation techniques) are a key to successful iterative methods. In general they are very problem dependent. Today we will discuss some standard preconditioners and we will explain the idea behind deflation. We will also discuss some efforts to standardise numerical software. Finally we will discuss how to perform scientific computations on a parallel computer.

Program:
• Preconditioning: diagonal scaling, Gauss-Seidel, SOR and SSOR; incomplete Choleski and incomplete LU
• Deflation
• Numerical software
• Parallelisation: shared memory versus distributed memory; domain decomposition

Preconditioning: why preconditioners? A preconditioned iterative solver solves the system M⁻¹Ax = M⁻¹b. The matrix M is called the preconditioner. The preconditioner should satisfy certain requirements:
• Convergence should be (much) faster (in time) for the preconditioned system than for the original system.
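In code, the flowchart reduces to a much coarser rule of thumb. The sketch below is an added simplification, not from the slides: it picks CG, MINRES or GMRES from basic symmetry and definiteness tests and ignores the preconditioner-flexibility and imaginary-eigenvalue branches.

```python
import numpy as np
from scipy.sparse.linalg import cg, minres, gmres

def solve_krylov(A, b, M=None):
    """Rough simplification of the solver-selection flowchart above:
    CG for symmetric positive definite A, MINRES for symmetric indefinite A,
    GMRES otherwise. The Bi-CGSTAB, IDR and GCR branches are omitted."""
    A = np.asarray(A)
    if np.allclose(A, A.T):
        try:
            np.linalg.cholesky(A)          # succeeds iff A is SPD
            return cg(A, b, M=M)
        except np.linalg.LinAlgError:
            return minres(A, b, M=M)
    return gmres(A, b, M=M)

# Example: an SPD matrix goes to CG, a nonsymmetric one to GMRES.
A_spd = np.array([[4.0, 1.0], [1.0, 3.0]])
A_gen = np.array([[2.0, 1.0], [0.0, 3.0]])
b = np.array([1.0, 2.0])
print(solve_krylov(A_spd, b)[0])
print(solve_krylov(A_gen, b)[0])
```
-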
A Non-Iterative Method for Optimizing Linear Functions in Convex Linearly Constrained Spaces
A non-iterative method for optimizing linear functions in convex linearly constrained spaces. GERARDO FEBRES*,**
* Departamento de Procesos y Sistemas, Universidad Simón Bolívar, Venezuela.
** Laboratorio de Evolución, Universidad Simón Bolívar, Venezuela.
Received…

This document introduces a new strategy to solve linear optimization problems. The strategy is based on the bounding condition that each constraint produces on each of the problem's dimensions. The solution of a linear optimization problem is located at the intersection of the constraints defining the extreme vertex. By identifying the constraints that limit the growth of the objective function value, we formulate a system of linear equations leading to the optimization problem's solution. The most complex operation of the algorithm is the inversion of a matrix sized by the number of dimensions of the problem. Therefore, the algorithm's complexity is comparable to that of the classical Simplex method and of the more recently developed linear programming algorithms. However, the algorithm offers the advantage of being non-iterative.

Key Words: linear optimization; optimization algorithm; constraint redundancy; extreme points.

1. INTRODUCTION. Since the appearance of Dantzig's Simplex Algorithm in 1947, linear optimization has become a widely used method to model multidimensional decision problems. Appearing even before the establishment of digital computers, the first version of the Simplex algorithm has remained for a long time the most effective way to solve large scale linear problems; an interesting fact perhaps explained by Dantzig [1] in a report written for Stanford University in 1987, where he indicated that even though most large scale problems are formulated with sparse matrices, the inverses of these matrices are generally dense [1], thus producing an important burden to record and track these matrices as the algorithm advances toward the problem solution.
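The core linear-algebra step the abstract describes, solving for the vertex defined by the binding constraints, can be sketched as follows. The small problem and the choice of active constraints are assumptions made for illustration; the paper's procedure for identifying those constraints is not reproduced here.

```python
import numpy as np

# Once the n constraints that bind at the optimal vertex of
#   max c^T x  subject to  A x <= b
# have been identified, the vertex solves the corresponding n x n system.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],        # x1 + x2 <= 4
              [1.0, 0.0],        # x1      <= 3
              [0.0, 1.0]])       # x2      <= 3
b = np.array([4.0, 3.0, 3.0])

active = [0, 1]                          # assume constraints 0 and 1 bind
x_star = np.linalg.solve(A[active], b[active])
print(x_star, c @ x_star)                # vertex (3, 1), objective 11
```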