A Power Method for Computing Square Roots of Complex Matrices


A Power Method for Computing Square Roots of Complex Matrices

Journal of Mathematical Analysis and Applications 213, 393-405 (1997), Article No. AY975517

Mohammed A. Hasan, Department of Electrical Engineering, Colorado State University, Fort Collins, Colorado 80523

Submitted by Harlan W. Stech. Received August 22, 1995.

In this paper higher order convergent methods for computing square roots of nonsingular complex matrices are derived. These methods are globally convergent and are based on eigenvalue shifting and powering. Specifically, it is shown that for each positive integer $r \ge 2$ a convergent method of order $r$ can be developed. These algorithms can be used to compute square roots of general nonsingular complex matrices, such as matrices with negative eigenvalues. © 1997 Academic Press

1. INTRODUCTION

A square root of a complex matrix $A \in \mathbb{C}^{m \times m}$ is defined to be any matrix $B \in \mathbb{C}^{m \times m}$ such that $B^2 = A$, where $\mathbb{C}$ is the field of complex numbers. If all eigenvalues of an $m \times m$ matrix $A$ are distinct, then the matrix equation $X^2 = A$ generally has exactly $2^m$ solutions. This follows from the fact that $A$ is diagonalizable, i.e., there exists a similarity matrix $U$ such that $A = UDU^{-1}$, where $D = \mathrm{diag}(\lambda_1, \ldots, \lambda_m)$, and thus $B = UD^{1/2}U^{-1}$, where $D^{1/2} = \mathrm{diag}((-1)^{i_1}\sqrt{\lambda_1}, \ldots, (-1)^{i_m}\sqrt{\lambda_m})$ and $i_k = 0$ or $1$ for $k = 1, 2, \ldots, m$. However, if $A$ has multiple eigenvalues, the number of solutions will be different from $2^m$, as shown next. Let $m = 2$; then without loss of generality we can assume that
$$A = \begin{pmatrix} \lambda^2 & 0 \\ 0 & \lambda^2 \end{pmatrix} \quad\text{or}\quad A = \begin{pmatrix} \lambda^2 & 1 \\ 0 & \lambda^2 \end{pmatrix}.$$
Assume that $A = \lambda^2 I$. Then the family
$$\left\{ \lambda \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} : 0 \le \theta < 2\pi \right\}$$
forms an infinite set of square roots of $A$. On the other hand, if $A = \begin{pmatrix} \lambda^2 & 1 \\ 0 & \lambda^2 \end{pmatrix}$, then $A$ has only two square roots, given by
$$\pm \begin{pmatrix} \lambda & \tfrac{1}{2}\lambda^{-1} \\ 0 & \lambda \end{pmatrix},$$
provided that $\lambda \ne 0$.

Unlike square roots of complex numbers, square roots of complex matrices may not exist. For example, when $\lambda = 0$ in the last matrix no square root exists. From this observation it is clear that for $2 \times 2$ matrices the equation $B^2 = A \ne 0$ has a solution if and only if $A$ has a nonzero eigenvalue.

To understand the structure of solutions of the equation $B^2 = A$ for $m = 2$, let $B = \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix}$ and $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$ with $B^2 = A$. Then we have the four equations $b_{i1}b_{1j} + b_{i2}b_{2j} = a_{ij}$ for $i, j = 1, 2$, which are equivalent to $F = 0$, where
$$F(b_{11}, b_{12}, b_{21}, b_{22}) = \begin{pmatrix} b_{11}^2 + b_{12}b_{21} - a_{11} \\ b_{11}b_{12} + b_{12}b_{22} - a_{12} \\ b_{21}b_{11} + b_{22}b_{21} - a_{21} \\ b_{21}b_{12} + b_{22}^2 - a_{22} \end{pmatrix}. \tag{1}$$
The Jacobian of this system can be shown to be
$$J = \frac{\partial F(b_{11}, b_{12}, b_{21}, b_{22})}{\partial (b_{11}, b_{12}, b_{21}, b_{22})} = \begin{pmatrix} 2b_{11} & b_{21} & b_{12} & 0 \\ b_{12} & b_{11}+b_{22} & 0 & b_{12} \\ b_{21} & 0 & b_{11}+b_{22} & b_{21} \\ 0 & b_{21} & b_{12} & 2b_{22} \end{pmatrix}.$$
It can be verified that $|J| = 4(b_{11}+b_{22})^2(b_{11}b_{22}-b_{12}b_{21}) = 4\,\mathrm{Trace}^2(B)\,|B|$. Here the notation $|J|$ denotes the determinant of $J$, and $\mathrm{Trace}(B) = \sum_{i=1}^{m} b_{ii}$. Since $A$ is nonsingular, it follows that $|B| \ne 0$, and therefore $J$ is nonsingular if and only if $b_{11} + b_{22} \ne 0$. Now assume that $A$ is nonsingular and let
$$B^0 = \begin{pmatrix} b_{11}^0 & b_{12}^0 \\ b_{21}^0 & b_{22}^0 \end{pmatrix}$$
be a solution of the equation $B^2 = A$ such that $b_{11}^0 + b_{22}^0 \ne 0$. Since $J$ is nonsingular at $(b_{11}^0, b_{12}^0, b_{21}^0, b_{22}^0)$, it follows from the implicit function theorem that $B^0$ is the only solution in a neighborhood of $(b_{11}^0, b_{12}^0, b_{21}^0, b_{22}^0)$; that is, square roots with nonzero trace are isolated.
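As a concrete check of the $2^m$ count for diagonalizable matrices, the following sketch (not from the paper; the test matrix and the use of NumPy are my own choices) enumerates all four square roots of a $2 \times 2$ matrix with distinct eigenvalues:

```python
import itertools
import numpy as np

# All 2^m square roots of a diagonalizable A = U D U^{-1} are obtained by
# choosing a sign (-1)^{i_k} for the square root of each eigenvalue.
A = np.array([[5.0, 4.0],
              [1.0, 2.0]])                  # distinct eigenvalues 6 and 1

evals, U = np.linalg.eig(A)
Uinv = np.linalg.inv(U)
roots = []
for signs in itertools.product([1, -1], repeat=len(evals)):
    D_half = np.diag(np.array(signs) * np.sqrt(evals.astype(complex)))
    B = U @ D_half @ Uinv
    assert np.allclose(B @ B, A)            # every sign choice gives B^2 = A
    roots.append(B)

print(len(roots), "square roots found")     # 2^2 = 4
```

Each sign pattern $(i_1, \ldots, i_m)$ selects one branch of $\sqrt{\lambda_k}$, giving $2^m$ matrices $B = UD^{1/2}U^{-1}$ with $B^2 = A$.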
From the eigendecomposition of $A$ indicated before, one can see that there are at least four square roots of $A$. The implicit function theorem guarantees exactly four square roots with nonzero traces. These square roots of $A$ which have nonzero traces are referred to as functions of $A$ [1]. Essentially, $B$ is a function of $A$ if $B$ can be expressed as a polynomial in $A$.

Now if $b_{11} + b_{22} = 0$, then it follows from (1) that $a_{11} = a_{22} = \lambda^2$ and $a_{12} = a_{21} = 0$ for some $\lambda \in \mathbb{C}$, i.e., $A$ is diagonal of the form $\lambda^2 I$. In this case the equation $B^2 = A$ has a two-dimensional family of solutions given by
$$\left\{ \begin{pmatrix} \pm\sqrt{\lambda^2 - rs} & r \\ s & \mp\sqrt{\lambda^2 - rs} \end{pmatrix} : r, s \in \mathbb{C} \right\}.$$
Note that when $r = s = \lambda\sin\theta$, we recover the one-parameter family $\lambda\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}$, $0 \le \theta < 2\pi$, described before.

The following result provides conditions on the eigenstructure of $A$ which ensure the existence of square roots that are functions of $A$.

PROPOSITION 1 [1]. Let $A$ be nonsingular and let its $p$ elementary divisors be coprime, that is, each eigenvalue appears in only one Jordan block. Then $A$ has precisely $2^p$ square roots, each of which is a function of $A$.

Several computational methods for square roots of complex matrices have been reported in the literature. In [2], the Newton-Raphson method was used for computing the principal square root of a complex matrix. An accelerated algorithm for computing the positive definite square root of a positive definite matrix was presented in [3]. A matrix continued fraction method was presented in [4]. The matrix sign algorithm was developed in [5]. A Schur method for computing square roots was developed in [6]. Fast stable methods for computing square roots were also presented in [7, 8]. It is noted that almost all of the above methods attain either linear or quadratic convergence. In this paper, higher order convergent methods of order $r \ge 2$ will be derived. The essence of these methods is a process whereby a sequence of matrices converging in the limit to a square root of $A$ is generated. This process involves creating gaps between the magnitudes of eigenvalues of different square roots of $A$ so that for sufficiently high powers the eigenvalues become decoupled. This is similar in principle to well-known methods such as those of Graeffe, Bernoulli, and the qd algorithm for solving polynomial equations, in that these methods are based on eigenvalue powering. For a survey of some of these methods the reader is referred to [9, 10] and the references therein.

Let $S$ be a set of commuting and thus simultaneously diagonalizable matrices. In the sequel, the notation $\lambda_i(X)$ denotes the $i$th eigenvalue of the square matrix $X \in S$ relative to a fixed similarity matrix which diagonalizes the set $S$. The notation $\sigma(A)$ denotes the set of eigenvalues of $A$. The symbol $\mathbb{R}$ is used to denote the set of real numbers, and $\|A\|$ denotes any vector norm of the matrix $A$.

2. DERIVATION OF THE MAIN RESULTS

In the next theorem we generate a sequence which converges to a square root of a square matrix.

THEOREM 2. Let $A \in \mathbb{C}^{m \times m}$ be a nonsingular matrix. Let $r$ be a positive integer such that $r \ge 2$, and define $A_k$ and $B_k$ recursively as follows. Let
$$A_{k+1} = \sum_{l=0}^{\lfloor r/2 \rfloor} \binom{r}{2l} A_k^{r-2l} B_k^{2l} A^l, \tag{2}$$
and
$$B_{k+1} = \sum_{l=0}^{\lfloor (r-1)/2 \rfloor} \binom{r}{2l+1} A_k^{r-2l-1} B_k^{2l+1} A^l. \tag{3}$$
Then there exists an $\alpha \in \mathbb{C}$ such that $B_k$ is nonsingular for all sufficiently large $k$. Set $X_k = B_k^{-1} A_k$; then, with the initial guess $A_0 = \alpha I_m$ and $B_0 = I_m$, the sequence $X_k$ converges to a square root $W$ of $A$. Moreover,
$$X_{k+1} \pm W = B_{k+1}^{-1} B_k^{\,r} (X_k \pm W)^r, \tag{4}$$
i.e., if the sequence $X_k$ converges, it is $r$th order convergent to $W$. Additionally, $\lim_{k\to\infty} A_{k+1}^{-1} A_k^r = I$ and $\lim_{k\to\infty} B_{k+1}^{-1} B_k^r = I$.

Proof. Let $W$ be any square root of $A$, i.e., $W^2 = A$; we show by induction that
$$A_k \pm B_k W = (\alpha I \pm W)^{r^k}. \tag{5}$$
Clearly (5) holds for $k = 0$. Assume that (5) holds for the nonnegative integer $k$. Then
$$(\alpha I \pm W)^{r^{k+1}} = \bigl((\alpha I \pm W)^{r^k}\bigr)^r = (A_k \pm B_k W)^r = \sum_{l=0}^{\lfloor r/2 \rfloor} \binom{r}{2l} A_k^{r-2l} B_k^{2l} A^l \pm \left( \sum_{l=0}^{\lfloor (r-1)/2 \rfloor} \binom{r}{2l+1} A_k^{r-2l-1} B_k^{2l+1} A^l \right) W = A_{k+1} \pm B_{k+1} W,$$
where the last equality follows from (2) and (3). Hence (5) is true for the integer $k+1$. This shows that (5) is true for each nonnegative integer $k$.

The nonsingularity of $A$ implies that there exists an $\alpha \in \mathbb{C}$ such that $|\lambda_j(\alpha I + W)| > |\lambda_j(\alpha I - W)|$ for $j = 1, \ldots, m$. From (5) we have $(\alpha I + W)^{r^k} = A_k + B_k W$ and $(\alpha I - W)^{r^k} = A_k - B_k W$. Solving the last two equations for $A_k$ and $B_k$ yields
$$A_k = \tfrac{1}{2}\bigl\{ (\alpha I + W)^{r^k} + (\alpha I - W)^{r^k} \bigr\}, \tag{6}$$
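The theorem translates directly into code. Below is a sketch of iteration (2)-(3); the test matrix, the choice $\alpha = 1$, and the iteration count are my own, and the per-step rescaling is an implementation guard that is not part of the theorem. Rescaling both iterates by a common scalar is legitimate because the recursion is homogeneous of degree $r$ in $(A_k, B_k)$, so $X_k = B_k^{-1}A_k$ is unchanged.

```python
import numpy as np
from math import comb

def power_sqrt(A, alpha=1.0, r=2, iters=6):
    """Iteration (2)-(3): X_k = B_k^{-1} A_k converges to a square root of A."""
    m = A.shape[0]
    mp = np.linalg.matrix_power
    Ak, Bk = alpha * np.eye(m), np.eye(m)
    for _ in range(iters):
        Ak1 = sum(comb(r, 2 * l) * mp(Ak, r - 2 * l) @ mp(Bk, 2 * l) @ mp(A, l)
                  for l in range(r // 2 + 1))
        Bk1 = sum(comb(r, 2 * l + 1) * mp(Ak, r - 2 * l - 1) @ mp(Bk, 2 * l + 1) @ mp(A, l)
                  for l in range((r - 1) // 2 + 1))
        # Entries grow like (alpha + sqrt(lambda))^{r^k}; dividing both iterates
        # by a common scalar leaves X_k unchanged and avoids overflow.
        s = np.linalg.norm(Ak1)
        Ak, Bk = Ak1 / s, Bk1 / s
    return np.linalg.solve(Bk, Ak)          # X_k = B_k^{-1} A_k

A = np.array([[5.0, 4.0], [1.0, 2.0]])
for r in (2, 3):
    X = power_sqrt(A, alpha=1.0, r=r)
    print(r, np.allclose(X @ X, A))         # True: X is a square root of A
```

For $r = 2$ the recursion collapses to $A_{k+1} = A_k^2 + B_k^2 A$ and $B_{k+1} = 2A_kB_k$; since all iterates commute, this is $X_{k+1} = \tfrac{1}{2}(X_k + X_k^{-1}A)$ in exact arithmetic.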
Recommended publications
  • Newton's Method for the Matrix Square Root*
MATHEMATICS OF COMPUTATION, VOLUME 46, NUMBER 174, APRIL 1986, PAGES 537-549. Newton's Method for the Matrix Square Root. By Nicholas J. Higham. Abstract. One approach to computing a square root of a matrix A is to apply Newton's method to the quadratic matrix equation F(X) = X² − A = 0. Two widely-quoted matrix square root iterations obtained by rewriting this Newton iteration are shown to have excellent mathematical convergence properties. However, by means of a perturbation analysis and supportive numerical examples, it is shown that these simplified iterations are numerically unstable. A further variant of Newton's method for the matrix square root, recently proposed in the literature, is shown to be, for practical purposes, numerically stable. 1. Introduction. A square root of an n × n matrix A with complex elements, A ∈ C^(n×n), is a solution X ∈ C^(n×n) of the quadratic matrix equation (1.1) F(X) = X² − A = 0. A natural approach to computing a square root of A is to apply Newton's method to (1.1). For a general function G: C^(n×n) → C^(n×n), Newton's method for the solution of G(X) = 0 is specified by an initial approximation X₀ and the recurrence (see [14, p. 140], for example) (1.2) X_{k+1} = X_k − G′(X_k)⁻¹G(X_k), k = 0, 1, 2, ..., where G′ denotes the Fréchet derivative of G. Identifying F(X + H) = X² − A + (XH + HX) + H² with the Taylor series for F, we see that F′(X) is a linear operator, F′(X): C^(n×n) → C^(n×n), defined by F′(X)H = XH + HX.
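The Newton recurrence (1.2) specialized to F(X) = X² − A requires solving the Sylvester equation X_kH + HX_k = A − X_k² at each step. A minimal sketch of this (my own, using SciPy's solve_sylvester; the matrix and the starting guess, which commutes with A, are arbitrary choices):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def newton_sqrtm(A, X0, iters=15):
    """Full Newton iteration (1.2) for F(X) = X^2 - A.

    Each step solves the Sylvester equation F'(X_k)H = -F(X_k),
    i.e. X_k H + H X_k = A - X_k^2, and sets X_{k+1} = X_k + H.
    """
    X = X0
    for _ in range(iters):
        H = solve_sylvester(X, X, A - X @ X)
        X = X + H
    return X

A = np.array([[4.0, 1.0],
              [0.0, 9.0]])
X = newton_sqrtm(A, X0=3.0 * np.eye(2))   # starting guess commutes with A
print(np.allclose(X @ X, A))              # True
```

The simplified rewritings the abstract refers to replace this Sylvester solve with cheaper formulas such as X_{k+1} = (X_k + X_k⁻¹A)/2, which are equivalent in exact arithmetic for commuting starting values but, as the paper shows, numerically unstable.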
  • Sensitivity and Stability Analysis of Nonlinear Kalman Filters with Application to Aircraft Attitude Estimation
Graduate Theses, Dissertations, and Problem Reports, 2013. Sensitivity and stability analysis of nonlinear Kalman filters with application to aircraft attitude estimation. Matthew Brandon Rhudy, West Virginia University. Follow this and additional works at: https://researchrepository.wvu.edu/etd Recommended Citation: Rhudy, Matthew Brandon, "Sensitivity and stability analysis of nonlinear Kalman filters with application to aircraft attitude estimation" (2013). Graduate Theses, Dissertations, and Problem Reports. 3659. https://researchrepository.wvu.edu/etd/3659 SENSITIVITY AND STABILITY ANALYSIS OF NONLINEAR KALMAN FILTERS WITH APPLICATION TO AIRCRAFT ATTITUDE ESTIMATION by Matthew Brandon Rhudy. Dissertation submitted to the Benjamin M. Statler College of Engineering and Mineral Resources at West Virginia University in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Aerospace Engineering. Approved by Dr. Yu Gu, Committee Chairperson; Dr. John Christian; Dr. Gary Morris; Dr. Marcello Napolitano; Dr. Powsiri Klinkhachorn. Department of Mechanical and Aerospace Engineering, Morgantown, West Virginia, 2013. Keywords: Attitude Estimation, Extended Kalman Filter, GPS/INS Sensor Fusion, Stochastic Stability. Copyright 2013, Matthew B. Rhudy.
  • Matlib: Matrix Functions for Teaching and Learning Linear Algebra and Multivariate Statistics
Package 'matlib', August 21, 2021. Type: Package. Title: Matrix Functions for Teaching and Learning Linear Algebra and Multivariate Statistics. Version: 0.9.5. Date: 2021-08-10. Maintainer: Michael Friendly <[email protected]>. Description: A collection of matrix functions for teaching and learning matrix linear algebra as used in multivariate statistical methods. These functions are mainly for tutorial purposes in learning matrix algebra ideas using R. In some cases, functions are provided for concepts available elsewhere in R, but where the function call or name is not obvious. In other cases, functions are provided to show or demonstrate an algorithm. In addition, a collection of functions are provided for drawing vector diagrams in 2D and 3D. License: GPL (>= 2). Language: en-US. URL: https://github.com/friendly/matlib BugReports: https://github.com/friendly/matlib/issues LazyData: TRUE. Suggests: knitr, rglwidget, rmarkdown, carData, webshot2, markdown. Additional_repositories: https://dmurdoch.github.io/drat Imports: xtable, MASS, rgl, car, methods. VignetteBuilder: knitr. RoxygenNote: 7.1.1. Encoding: UTF-8. NeedsCompilation: no. Author: Michael Friendly [aut, cre] (<https://orcid.org/0000-0002-3237-0941>), John Fox [aut], Phil Chalmers [aut], Georges Monette [ctb], Gaston Sanchez [ctb]. Repository: CRAN. Date/Publication: 2021-08-21 15:40:02 UTC. R topics documented: adjoint, angle, arc, arrows3d, buildTmat, cholesky, circle3d, class, cofactor, cone3d, corner, Det, echelon, Eigen, gaussianElimination, Ginv, GramSchmidt, gsorth, Inverse, J, len, LU, matlib, matrix2latex, minor, MoorePenrose, …
  • Notes on Linear Algebra and Matrix Analysis
Notes on Linear Algebra and Matrix Analysis. Maxim Neumann, May 2006, Version 0.1.1.

1 Matrix Basics. Literature on this topic: [1-4].

x†y ⟺ ⟨y, x⟩: standard inner product. x†x = 1: x is normalized. x†y = 0: x, y are orthogonal. x†y = 0, x†x = 1, y†y = 1: x, y are orthonormal. Ax = y is uniquely solvable if A is nonsingular (its columns are linearly independent).

Majorization: arrange the components of b and a in decreasing order; then b majorizes a ⟺ ∑_{i=1}^{k} b_i ≥ ∑_{i=1}^{k} a_i for all k ∈ [1, ..., n]. (1) The collection of all vectors in Rⁿ that are majorized by a given vector a ∈ Rⁿ may be obtained by forming the convex hull of the n! vectors computed by permuting the n components of a.

Direct sum of matrices A ∈ M_{n1}, B ∈ M_{n2}: A ⊕ B = [ A 0 ; 0 B ] ∈ M_{n1+n2}. (2)

⟨A, B⟩ = trace(AB†): matrix inner product.

1.1 Trace. trace A = ∑ᵢ λᵢ (3); trace(A + B) = trace A + trace B (4); trace AB = trace BA (5).

1.2 Determinants. The determinant det(A) expresses the (signed) volume spanned by the rows of A. det(A) = 0 ⟺ A is singular: the linear system is not uniquely solvable, A⁻¹ does not exist, and the vectors in A are linearly dependent. (6) det(A) ≠ 0 ⟺ A is regular/nonsingular. A_ij ∈ R → det(A) ∈ R; A_ij ∈ C → det(A) ∈ C. If A is a square matrix (A_{n×n}) with eigenvalues λᵢ, then det(A) = ∏ᵢ λᵢ. det Aᵀ = det A (7); det A† = (det A)* (8); det AB = det A det B (9). Elementary operations on a matrix and its determinant (A* denoting the modified matrix): interchange of two rows: det A* = −det A; multiplication of a row by a nonzero scalar c: det A* = c det A; addition of a scalar multiple of one row to another row: det A* = det A. For the 2 × 2 case: det [ a b ; c d ] = ad − bc. (10)

2 Eigenvalues, Eigenvectors, and Similarity. σ(A_{n×n}) = {λ₁, ..., λₙ} is the set of eigenvalues of A, also called the spectrum of A.
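These trace and determinant identities are easy to sanity-check numerically; a small sketch of my own (the random matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# trace A = sum of eigenvalues, det A = product of eigenvalues
evals = np.linalg.eigvals(A)
print(np.isclose(np.trace(A), evals.sum().real))
print(np.isclose(np.linalg.det(A), np.prod(evals).real))

# trace(AB) = trace(BA) and det(AB) = det(A) det(B)
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))
```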
  • Functions Preserving Matrix Groups and Iterations for the Matrix Square Root∗
FUNCTIONS PRESERVING MATRIX GROUPS AND ITERATIONS FOR THE MATRIX SQUARE ROOT. NICHOLAS J. HIGHAM, D. STEVEN MACKEY, NILOUFER MACKEY, AND FRANÇOISE TISSEUR. Abstract. For which functions f does A ∈ G ⇒ f(A) ∈ G when G is the matrix automorphism group associated with a bilinear or sesquilinear form? For example, if A is symplectic, when is f(A) symplectic? We show that group structure is preserved precisely when f(A⁻¹) = f(A)⁻¹ for bilinear forms and when f(A⁻*) = f(A)⁻* for sesquilinear forms. Meromorphic functions that satisfy each of these conditions are characterized. Related to structure preservation is the condition $f(\overline{A}) = \overline{f(A)}$, and analytic functions and rational functions satisfying this condition are also characterized. These results enable us to characterize all meromorphic functions that map every G into itself as the ratio of a polynomial and its "reversal", up to a monomial factor and conjugation. The principal square root is an important example of a function that preserves every automorphism group G. By exploiting the matrix sign function, a new family of coupled iterations for the matrix square root is derived. Some of these iterations preserve every G; all of them are shown, via a novel Fréchet derivative-based analysis, to be numerically stable. A rewritten form of Newton's method for the square root of A ∈ G is also derived. Unlike the original method, this new form has good numerical stability properties, and we argue that it is the iterative method of choice for computing A^(1/2) when A ∈ G. Our tools include a formula for the sign of a certain block 2 × 2 matrix, the generalized polar decomposition along with a wide class of iterations for computing it, and a connection between the generalized polar decomposition of I + A and the square root of A ∈ G.
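One classical coupled square-root iteration connected to the matrix sign function is the Denman-Beavers iteration; the sketch below is illustrative only and is not claimed to be one of the new iterations derived in the paper:

```python
import numpy as np

def denman_beavers(A, iters=25):
    """Coupled iteration: Y_k -> A^{1/2} and Z_k -> A^{-1/2} simultaneously,
    for A with no eigenvalues on the closed negative real axis."""
    Y, Z = A.astype(float), np.eye(A.shape[0])
    for _ in range(iters):
        # Both updates use the previous (Y, Z) pair.
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Y, Z

A = np.array([[4.0, 1.0], [0.0, 9.0]])
Y, Z = denman_beavers(A)
print(np.allclose(Y @ Y, A), np.allclose(Y @ Z, np.eye(2)))  # True True
```

Carrying the pair (Y, Z) instead of Y alone is what gives iterations of this family their good numerical stability, which is the theme of the paper's Fréchet-derivative analysis.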
  • Computing Real Square Roots of a Real Matrix*
Computing Real Square Roots of a Real Matrix. Nicholas J. Higham, Department of Mathematics, University of Manchester, Manchester M13 9PL, England. In memory of James H. Wilkinson. Submitted by Hans Schneider. ABSTRACT. Björck and Hammarling [1] describe a fast, stable Schur method for computing a square root X of a matrix A (X² = A). We present an extension of their method which enables real arithmetic to be used throughout when computing a real square root of a real matrix. For a nonsingular real matrix A, conditions are given for the existence of a real square root, and for the existence of a real square root which is a polynomial in A; the number of square roots of the latter type is determined. The conditioning of matrix square roots is investigated, and an algorithm is given for the computation of a well-conditioned square root. 1. INTRODUCTION. Given a matrix A, a matrix X for which X² = A is called a square root of A. Several authors have considered the computation of matrix square roots [3, 4, 9, 10, 15, 16]. A particularly attractive method which utilizes the Schur decomposition is described by Björck and Hammarling [1]; in general it requires complex arithmetic. Our main purpose is to show how the method can be extended so as to compute a real square root of a real matrix, if one exists, in real arithmetic. The theory behind the existence of matrix square roots is nontrivial, as can be seen by noting that while the n × n identity matrix has infinitely many square roots for n ≥ 2 (any involutory matrix such as a Householder transformation is a square root), a nonsingular Jordan block has precisely two square roots (this is proved in Corollary 1).
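The complex Schur method of Björck and Hammarling that this paper extends is compact enough to sketch (my own transcription; the paper's real-arithmetic extension additionally handles the 2 × 2 blocks of the real Schur form, which is omitted here):

```python
import numpy as np
from scipy.linalg import schur

def schur_sqrtm(A):
    """Complex Schur method: A = Q T Q*, build upper-triangular U with
    U^2 = T column by column, then X = Q U Q* satisfies X^2 = A.
    Sketch only: assumes U[i,i] + U[j,j] != 0 (e.g., A nonsingular)."""
    T, Q = schur(A, output='complex')
    n = T.shape[0]
    U = np.zeros((n, n), dtype=complex)
    for i in range(n):
        U[i, i] = np.sqrt(T[i, i])           # one branch per diagonal entry
    for j in range(1, n):
        for i in range(j - 1, -1, -1):
            s = sum(U[i, k] * U[k, j] for k in range(i + 1, j))
            U[i, j] = (T[i, j] - s) / (U[i, i] + U[j, j])
    return Q @ U @ Q.conj().T

A = np.array([[4.0, 1.0], [2.0, 9.0]])
X = schur_sqrtm(A)
print(np.allclose(X @ X, A))                 # True
```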
  • The Square Root Function of a Matrix
Georgia State University, ScholarWorks @ Georgia State University. Mathematics Theses, Department of Mathematics and Statistics, 4-24-2007. The Square Root Function of a Matrix. Crystal Monterz Gordon. Follow this and additional works at: https://scholarworks.gsu.edu/math_theses Part of the Mathematics Commons. Recommended Citation: Gordon, Crystal Monterz, "The Square Root Function of a Matrix." Thesis, Georgia State University, 2007. https://scholarworks.gsu.edu/math_theses/24 THE SQUARE ROOT FUNCTION OF A MATRIX by Crystal Monterz Gordon. Under the Direction of Marina Arav and Frank Hall. ABSTRACT. Having origins in the increasingly popular Matrix Theory, the square root function of a matrix has received notable attention in recent years. In this thesis, we discuss some of the more common matrix functions and their general properties, but we specifically explore the square root function of a matrix and the most efficient method (Schur decomposition) of computing it. Calculating the square root of a 2×2 matrix by the Cayley-Hamilton Theorem is highlighted, along with square roots of positive semidefinite matrices.
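The 2 × 2 Cayley-Hamilton computation highlighted in the thesis reduces to a closed formula: since A² = (trace A)A − (det A)I, the matrix X = (A + δI)/τ with δ = √(det A) and τ = √(trace A + 2δ) satisfies X² = A. A sketch of my own (one sign branch only; assumes τ ≠ 0):

```python
import numpy as np

def sqrt_2x2(A):
    """Square root of a 2x2 matrix via Cayley-Hamilton:
    X = (A + delta*I) / tau, delta = sqrt(det A), tau = sqrt(trace A + 2*delta)."""
    delta = np.sqrt(complex(np.linalg.det(A)))
    tau = np.sqrt(np.trace(A) + 2 * delta)
    return (A + delta * np.eye(2)) / tau

A = np.array([[5.0, 4.0], [1.0, 2.0]])
X = sqrt_2x2(A)
print(np.allclose(X @ X, A))   # True
```

Expanding X² and substituting A² = (trace A)A − (det A)I makes the cross terms cancel, leaving (trace A + 2δ)A/τ² = A.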
  • Logarithms and Square Roots of Real Matrices: Existence, Uniqueness, and Applications in Medical Imaging
Logarithms and Square Roots of Real Matrices: Existence, Uniqueness, and Applications in Medical Imaging. Jean Gallier, Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA. [email protected]. September 2, 2019. Abstract. The need for computing logarithms or square roots of real matrices arises in a number of applied problems. A significant class of problems comes from medical imaging. One of these problems is to interpolate and to perform statistics on data represented by certain kinds of matrices (such as symmetric positive definite matrices in DTI). Another important and difficult problem is the registration of medical images. For both of these problems, the ability to compute logarithms of real matrices turns out to be crucial. However, not all real matrices have a real logarithm and thus, it is important to have sufficient conditions for the existence (and possibly the uniqueness) of a real logarithm for a real matrix. Such conditions (involving the eigenvalues of a matrix) are known, both for the logarithm and the square root. As far as I know, with the exception of Higham's recent book [18], proofs of the results involving these conditions are scattered in the literature and it is not easy to locate them. Moreover, Higham's excellent book assumes a certain level of background in linear algebra that readers interested in applications to medical imaging may not possess, so we feel that a more elementary presentation might be a valuable supplement to Higham [18]. In this paper, I present a unified exposition of these results, including a proof of the existence of the Real Jordan Form, and give more direct proofs of some of these results using the Real Jordan Form.
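For the symmetric positive definite matrices common in DTI, a real logarithm and a real principal square root both exist; a minimal SciPy usage sketch (my own; the matrix is an arbitrary SPD example):

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm

# An SPD matrix has positive eigenvalues, so logm and sqrtm return real results.
S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
L, R = logm(S), sqrtm(S)
print(np.allclose(expm(L), S))   # exp(log S) recovers S
print(np.allclose(R @ R, S))     # the principal square root squares back to S
```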
  • CHUNG-ANG UNIVERSITY Solutions to Problem Set #2
CHUNG-ANG UNIVERSITY, Linear Algebra, Spring 2014. Solutions to Problem Set #2: Answers to Practice Problems.

Problem 2.1. Let A, B, C, D and E be matrices of the following sizes: A (3 × 1), B (3 × 6), C (6 × 2), D (2 × 6), E (1 × 3). For each of the following, determine whether or not the given expression is defined. In other words, are the matrices of the correct size so that the given expression is a valid one? For those that are defined, determine the size of the resulting matrix. (a) Bᵀ(A + Eᵀ) (b) (Cᵀ + D)Bᵀ (c) (BDᵀ)Cᵀ. Answer: (a) The expression is defined and equals a 6 × 1 vector. (b) The expression is defined and is a 2 × 3 matrix. (c) The expression is defined and the result is a 3 × 6 matrix.

Problem 2.2. Consider the matrices A, B, C, D, E (entries as given in the original problem set) and F = [ a b c ; b a b ; c b a ], and evaluate each of the following expressions. (a) A(BC) (b) Tr(4Eᵀ − D) (c) Tr(FFᵀ). Answer: (a) A(BC) = [ 58 22 ; −50 226 ]. (b) Tr(4Eᵀ − D) = 4(3) − 1 = 11. (c) Tr(FFᵀ) = 3a² + 4b² + 2c².

Problem 2.3. Find all values of k, if any, that satisfy the equation [2 2 k] · [ 1 2 0 ; 2 0 3 ; 0 3 1 ] · [2 2 k]ᵀ = 0. Answer: k = −2, −10.

Problem 2.4. A matrix is said to be an orthogonal matrix if its transpose is the same as its inverse.
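The stated answers can be machine-checked; a SymPy sketch of my own for Problems 2.2(c) and 2.3:

```python
import sympy as sp

a, b, c, k = sp.symbols('a b c k')

# Problem 2.2(c): Tr(F F^T) for the symmetric pattern matrix F.
F = sp.Matrix([[a, b, c], [b, a, b], [c, b, a]])
print(sp.expand((F * F.T).trace()))            # 3*a**2 + 4*b**2 + 2*c**2

# Problem 2.3: values of k making the quadratic form vanish.
M = sp.Matrix([[1, 2, 0], [2, 0, 3], [0, 3, 1]])
v = sp.Matrix([2, 2, k])
print(sp.solve(sp.expand((v.T * M * v)[0]), k))  # [-10, -2]
```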
  • Global Optimization: from Theory to Implementation
Global Optimization: from Theory to Implementation. Leo Liberti, DEI, Politecnico di Milano, Piazza L. da Vinci 32, 20133 Milano, Italy. Nelson Maculan, COPPE, Universidade Federal do Rio de Janeiro, P.O. Box 68511, 21941-972 Rio de Janeiro, Brazil. To Anne-Marie. Preface. The idea for this book was born on the coast of Montenegro, in October 2003, when we were invited to the Sym-Op-Is Serbian Conference on Operations Research. During those days we talked about many optimization problems, going from discussion to implementation in a matter of minutes, reaping good profits from the whole "hands-on" process, and having a lot of fun in the meanwhile. All the wrong ideas were weeded out almost immediately by failed computational experiments, so we wasted little time on those. Unfortunately, translating ideas into programs is not always fast and easy, and moreover the amount of literature about the implementation of global optimization algorithms is scarce. The scope of this book is that of moving a few steps towards the systematization of the path that goes from the invention to the implementation and testing of a global optimization algorithm. The works contained in this book have been written by various researchers working at academic or industrial institutions; some very well known, some less famous but expert nonetheless in the discipline of actually getting global optimization to work. The papers in this book underline two main developments in the implementation side of global optimization: firstly, the introduction of symbolic manipulation algorithms and automatic techniques for carrying out algebraic transformations; and secondly, the relatively wide availability of extremely efficient global optimization heuristics and metaheuristics that target large-scale nonconvex constrained optimization problems directly.
  • Linear Algebra
Math 221: LINEAR ALGEBRA. Chapter 8: Orthogonality. §8-3. Positive Definite Matrices. Le Chen, Emory University, 2020 Fall (last updated on 11/10/2020). Creative Commons License (CC BY-NC-SA). Slides are adapted from those by Karen Seyffarth, University of Calgary.

Positive Definite Matrices; Cholesky factorization - Square Root of a Matrix.

Definition. An n × n matrix A is positive definite if it is symmetric and has positive eigenvalues, i.e., if λ is an eigenvalue of A, then λ > 0.

Theorem. If A is a positive definite matrix, then det(A) > 0 and A is invertible.

Proof. Let λ₁, λ₂, ..., λₙ denote the (not necessarily distinct) eigenvalues of A. Since A is symmetric, A is orthogonally diagonalizable. In particular, A ∼ D, where D = diag(λ₁, λ₂, ..., λₙ). Similar matrices have the same determinant, so det(A) = det(D) = λ₁λ₂···λₙ. Since A is positive definite, λᵢ > 0 for all i, 1 ≤ i ≤ n; it follows that det(A) > 0, and therefore A is invertible.
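A numerical illustration of the definition and theorem (my own sketch; the matrix is positive definite by construction):

```python
import numpy as np

# A = M^T M + I is symmetric with positive eigenvalues, hence positive definite.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M.T @ M + np.eye(4)

evals = np.linalg.eigvalsh(A)        # eigenvalues of a symmetric matrix
print((evals > 0).all())             # True: positive definite
print(np.isclose(np.linalg.det(A), np.prod(evals)))  # det(A) = product of eigenvalues > 0
np.linalg.inv(A)                     # succeeds: A is invertible, as the theorem asserts
```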
  • Cholesky Decomposition
Cholesky decomposition. In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.[1] Contents: statement; LDL decomposition; example; applications (linear least squares, non-linear optimization, Monte Carlo simulation, Kalman filters, matrix inversion); computation (the Cholesky algorithm, the Cholesky-Banachiewicz and Cholesky-Crout algorithms, stability, block and updating variants); proof for positive semi-definite matrices; generalization; implementations in programming libraries. Statement. The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form A = LL*, where L is a lower triangular matrix with real and positive diagonal entries, and L* denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition.[2] If the matrix A is Hermitian and positive semi-definite, then it still has a decomposition of the form A = LL* if the diagonal entries of L are allowed to be zero.[3] When A has real entries, L has real entries as well, and the factorization may be written A = LLᵀ.[4] The Cholesky decomposition is unique when A is positive definite; there is only one lower triangular matrix L with strictly positive diagonal entries such that A = LL*.
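The Cholesky-Banachiewicz variant named above computes L one row at a time; a compact sketch of my own (real symmetric positive definite input assumed; no pivoting or semidefinite handling):

```python
import numpy as np

def cholesky_banachiewicz(A):
    """Row-by-row Cholesky: returns lower-triangular L with A = L L^T."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]   # subtract already-computed terms
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = cholesky_banachiewicz(A)
print(np.allclose(L @ L.T, A))                # True
print(np.allclose(L, np.linalg.cholesky(A)))  # matches the library routine
```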