Course Compendium, Applied Matrix Analysis

Christopher Engström, Karl Lundengård, Sergei Silvestrov

November 14, 2018

Contents

1 Review of basic linear algebra ... 6
    1.1 Notation and elementary matrix operations ... 6
    1.2 Linear equation systems ... 8
        1.2.1 Determinants ... 10
    1.3 Eigenvalues and eigenvectors ... 13
    1.4 Exercises ... 14
2 Matrix factorization and canonical forms ... 16
    2.1 Important types of matrices and some useful properties ... 17
        2.1.1 Triangular and Hessenberg matrices ... 17
        2.1.2 Hermitian matrices ... 18
        2.1.3 Unitary matrices ... 18
        2.1.4 Positive definite matrices ... 19
    2.2 Factorization ... 19
        2.2.1 Spectral factorization ... 20
        2.2.2 Rank factorization ... 20
        2.2.3 LU and Cholesky factorization ... 21
        2.2.4 QR factorization ... 22
        2.2.5 Canonical forms ... 22
        2.2.6 Reduced row echelon form ... 23
        2.2.7 Jordan normal form ... 23
        2.2.8 Singular value decomposition (SVD) ... 24
    2.3 Applications ... 25
        2.3.1 Portfolio evaluation using Monte Carlo simulation ... 25
3 Non-negative matrices and graphs ... 27
    3.1 Introduction to graphs ... 27
    3.2 Connectivity and irreducible matrices ... 30
    3.3 The Laplacian matrix ... 33
    3.4 Exercises ... 35
    3.5 Applications ... 35
        3.5.1 Shortest path ... 35
        3.5.2 Resistance distance ... 36
4 Perron-Frobenius theory ... 39
    4.1 Perron-Frobenius for non-negative matrices ... 39
    4.2 Exercises ... 42
    4.3 Applications ... 43
        4.3.1 Leontief input-output model ... 43
5 Stochastic matrices and Perron-Frobenius ... 45
    5.1 Stochastic matrices and Markov chains ... 45
    5.2 Irreducible, primitive stochastic matrices ... 48
    5.3 Irreducible, imprimitive stochastic matrices ... 49
    5.4 Reducible stochastic matrices ... 50
    5.5 Hitting times and hitting probabilities ... 52
    5.6 A short look at continuous time Markov chains ... 54
    5.7 Exercises ... 56
    5.8 Applications ... 57
        5.8.1 Return to the Leontief model ... 57
6 Linear spaces and projections ... 59
    6.1 Linear spaces ... 59
    6.2 Inner product spaces ... 65
    6.3 Projections ... 69
    6.4 Applications ... 73
        6.4.1 Fourier approximation ... 73
        6.4.2 Finding a QR decomposition using the Gram-Schmidt process ... 74
7 Principal component analysis (PCA) and dimension reduction ... 77
    7.1 Mean, variance and covariance ... 77
    7.2 Principal component analysis ... 81
    7.3 Choosing the number of dimensions ... 83
8 Linear transformations ... 86
    8.1 Linear transformations in geometry ... 89
    8.2 Surjection, injection and combined transformations ... 91
    8.3 Application: Transformations in computer graphics and homogeneous coordinates ... 92
    8.4 Kernel and image ... 95
    8.5 Isomorphisms ... 98
9 Regression and the Least Square Method ... 100
    9.1 Multiple linear regression ... 102
        9.1.1 Single linear regression ... 102
        9.1.2 Fitting several variables ... 103
    9.2 Finding the coefficients ... 105
    9.3 Weighted Least Squares Method (WLS) ... 106
    9.4 Non-linear regression ... 107
    9.5 Pseudoinverses ... 108
10 Matrix functions ... 110
    10.1 Matrix power function ... 110
        10.1.1 Calculating $A^n$ ... 111
    10.2 Matrix root function ... 113
        10.2.1 Root of a diagonal matrix ... 113
        10.2.2 Finding the square root of a matrix using the Jordan canonical form ... 114
    10.3 Matrix polynomials ... 115
        10.3.1 The Cayley-Hamilton theorem ... 116
        10.3.2 Calculating matrix polynomials ... 119
        10.3.3 Companion matrices ... 120
    10.4 Exponential matrix ... 122
        10.4.1 Solving matrix differential equations ... 124
    10.5 General matrix functions ... 125
11 Matrix equations ... 126
    11.1 Tensor products ... 126
    11.2 Solving matrix equations ... 128
12 Numerical methods for computing eigenvalues and eigenvectors ... 131
    12.1 The Power method ... 131
        12.1.1 Inverse Power method ... 134
    12.2 The QR-method ... 136
    12.3 Applications ... 139
        12.3.1 PageRank ... 139
Index ... 143

Notation

$\vec{v}$                       vector (usually represented by a row or column matrix)
$|\vec{v}|$                     length (Euclidean norm) of a vector
$\|\vec{v}\|$                   norm of a vector
$M$                             matrix
$M^\top$                        transpose
$M^{-1}$                        inverse
$M^*$                           pseudoinverse
$\overline{M}$                  complex conjugate
$M^H$                           Hermitian transpose
$M^k$                           the k:th power of the matrix $M$, see Section 10.1
$m_{i,j}$                       the element on the i:th row and j:th column of $M$
$m_{i,j}^{(k)}$                 the element on the i:th row and j:th column of $M^k$
$M_{i:}$                        the i:th row of $M$
$M_{:j}$                        the j:th column of $M$
$\otimes$                       Kronecker (tensor) product
$a$                             scalar
$|a|$                           absolute value of a scalar
$\langle \cdot,\cdot \rangle$   inner product (scalar product)
$\mathbb{Z}$                    set of integers
$\mathbb{Z}^+$                  set of positive integers
$\mathbb{R}$                    set of real numbers
$\mathbb{C}$                    set of complex numbers
$M_{m \times n}(K)$             set of matrices with dimension $m \times n$ and elements from the set $K$

1 Review of basic linear algebra

The reader should already be familiar with the concepts and content in this chapter; it serves as a reminder and overview of what we already know. We will start by giving some notation used throughout the book as well as reminding ourselves of the most basic matrix operations. We will then take a look at linear systems, eigenvalues and eigenvectors and related concepts as seen in elementary courses in linear algebra.

While the methods presented here work, they are often very slow or unstable when used on a computer. Much of our later work will relate to finding better ways to compute these or similar quantities, or will give other examples of where and why they are useful.

1.1 Notation and elementary matrix operations

We will here go through the most elementary matrix operations as well as give the notation which will be used throughout the book. We remind ourselves that a matrix is a rectangular array of numbers, symbols or expressions. Some examples of matrices can be seen below:

$$A = \begin{bmatrix} 1 & 0 \\ 0 & 4 \end{bmatrix} \qquad B = \begin{bmatrix} 0 & 1-i \end{bmatrix} \qquad C = \begin{bmatrix} 1 & -2 \\ -2 & 3 \\ 3 & 1 \end{bmatrix} \qquad D = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \end{bmatrix}$$

We note that matrices need not be square, as in $C$ and $D$. We also note that although matrices can be complex valued, in most of our examples and practical applications we only need to work with real valued matrices.

• We denote every element of $A$ as $a_{i,j}$, where $i$ is its row number and $j$ is its column number. For example, looking at $C$ we have $c_{3,2} = 1$.

• The size of a matrix is the number of rows and columns (or the index of the bottom right element): we say that $C$ above is a $3 \times 2$ matrix.

• A matrix with only one row or column is called a row vector or column vector respectively. We usually denote vectors with an arrow ($\vec{v}$) and we assume all vectors are column vectors unless stated otherwise.

• The diagonal of a matrix is the elements running diagonally from the top left corner towards the bottom right, $(a_{1,1}, a_{2,2}, \ldots, a_{n,n})$. A matrix where all elements except those on the diagonal are zero is called a diagonal matrix. For example $A$ is a diagonal matrix, but so is $D$, although we will usually only consider square diagonal matrices.

• The trace of a matrix is the sum of the elements on the diagonal.

Matrix addition

Add every element in the first matrix to the corresponding element in the second matrix. Both matrices need to be of the same size for addition to be defined.

$$\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix} + \begin{bmatrix} 1 & -2 \\ 2 & -1 \end{bmatrix} = \begin{bmatrix} 1+1 & 0-2 \\ 0+2 & 2-1 \end{bmatrix} = \begin{bmatrix} 2 & -2 \\ 2 & 1 \end{bmatrix}$$

Although addition between a scalar and a matrix is undefined, some authors write it as such, in which case they usually mean to add the scalar to every element of the matrix, as if the scalar were a matrix of the same size with every element equal to the scalar.

The general definition of matrix addition is: let $A = B + C$, where all three matrices have the same number of rows and columns. Then $a_{i,j} = b_{i,j} + c_{i,j}$ for every element in $A$.
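These elementwise operations are easy to check numerically. The following short sketch (not part of the compendium itself; it assumes Python with NumPy installed) reproduces the addition example above and the informal scalar-plus-matrix convention:

```python
import numpy as np

# The two matrices from the addition example above.
A = np.array([[1, 0],
              [0, 2]])
B = np.array([[1, -2],
              [2, -1]])

# Matrix addition is elementwise and requires equal sizes.
print(A + B)
# [[ 2 -2]
#  [ 2  1]]

# NumPy's broadcasting implements the informal "scalar + matrix"
# convention: the scalar is added to every element.
print(3 + A)
# [[4 3]
#  [3 5]]
```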
Matrix multiplication

Given the two matrices $A$ and $B$ we get the product $AB$ as the matrix whose elements $e_{i,j}$ are found by multiplying the elements in row $i$ of $A$ with the elements of column $j$ of $B$ and adding the results.

$$\begin{bmatrix} 3 & 2 & 0 & 1 \\ 1 & 2 & 3 & 4 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 1 & 3 \end{bmatrix} \begin{bmatrix} 3 & 2 & 0 & 1 \\ 1 & 2 & 3 & 4 \\ 0 & 1 & 2 & 1 \\ 0 & 0 & 1 & 3 \end{bmatrix} = \begin{bmatrix} 11 & 10 & 7 & 14 \\ 5 & 9 & 16 & 24 \\ 1 & 4 & 8 & 9 \\ 0 & 1 & 5 & 10 \end{bmatrix}$$

For example, for the element in row 2, column 1 of the product we have: $1 \cdot 3 + 2 \cdot 1 + 3 \cdot 0 + 4 \cdot 0 = 5$.

We note that we need the number of columns in $A$ to be the same as the number of rows in $B$, but $A$ can have any number of rows and $B$ can have any number of columns. Also we have:

• Generally $AB \neq BA$. Why?

• The size of $AB$ is (number of rows of $A$) $\times$ (number of columns of $B$).

• The identity matrix $I$ is the matrix with ones on the diagonal and zeros elsewhere. For the identity matrix we have $AI = IA = A$.

Multiplying a scalar with a matrix is done by multiplying the scalar with every element of the matrix:

$$3 \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix} = \begin{bmatrix} 3 \cdot 1 & 3 \cdot 0 \\ 3 \cdot 0 & 3 \cdot 2 \end{bmatrix} = \begin{bmatrix} 3 & 0 \\ 0 & 6 \end{bmatrix}$$

The general definition of matrix multiplication is: let $A = BC$, where $B$ is $m \times n$ and $C$ is $n \times p$. Then

$$a_{i,j} = \sum_{k=1}^{n} b_{i,k} c_{k,j}$$

for each element in $A$.

Matrix transpose

The transpose of a matrix is obtained by flipping the rows and columns, such that the elements of the first row become the elements of the first column.
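As a numerical counterpart to the definitions above, here is a small sketch (again assuming Python with NumPy; not part of the original text) that reproduces the $4 \times 4$ product, checks one element by hand, and illustrates non-commutativity and the transpose:

```python
import numpy as np

# Both factors in the worked example are the same 4x4 matrix.
A = np.array([[3, 2, 0, 1],
              [1, 2, 3, 4],
              [0, 1, 2, 1],
              [0, 0, 1, 3]])

print(A @ A)
# [[11 10  7 14]
#  [ 5  9 16 24]
#  [ 1  4  8  9]
#  [ 0  1  5 10]]

# Element (2,1) of the product (1-indexed) is the dot product of
# row 2 of the left factor with column 1 of the right factor:
print(A[1, :] @ A[:, 0])   # 1*3 + 2*1 + 3*0 + 4*0 = 5

# Matrix multiplication is generally not commutative: AB != BA.
B = np.array([[0, 1],
              [1, 0]])
C = np.array([[1, 2],
              [3, 4]])
print(np.array_equal(B @ C, C @ B))   # False

# The transpose flips rows and columns: the first row of A
# becomes the first column of A.T.
print(A.T)
```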