A Note on Frobenius Inner Product and the M-Distance Matrices of a Tree


Malaya Journal of Matematik, Vol. 8, No. 3, 1321-1327, 2020. https://doi.org/10.26637/MJM0803/0103

A note on Frobenius inner product and the m-distance matrices of a tree

P.A. Asharaf¹* and Bindhu K. Thomas²

Abstract
The $m$-distance matrix $D_m$ of a simple connected undirected graph plays an important role in computing the distance matrix $D$ of the graph from the powers of the adjacency matrix using the Hadamard product. This paper shows that for an undirected tree $T$ with diameter $d$, $\{D_0, D_1, \dots, D_d\}$ is an orthogonal basis for the space spanned by the binary equivalent matrices of the first $d+1$ powers of the adjacency matrix of $T$, and it gives an invertible conversion matrix for finding the $m$-distance matrices of $T$ using the Frobenius inner product on matrices.

Keywords: Adjacency matrix, Distance matrix, Binary matrix, Diameter, Hadamard product, m-distance matrix, Frobenius inner product, Frobenius norm.

AMS Subject Classification: 05C12, 05C40, 05C50, 05C62, 05B20, 46C05, 15A60.

¹Mathematics Research Centre, Mary Matha Arts and Science College, Mananthavady-670645, Kerala, India.
²Department of Mathematics, Mary Matha Arts and Science College, Mananthavady-670645, Kerala, India.
*Corresponding author: [email protected]; [email protected]

Article History: Received 12 April 2020; Accepted 19 August 2020. ©2020 MJM.

Contents
1 Introduction ........ 1321
2 Preliminaries ........ 1322
3 Main Results ........ 1322
4 Illustration ........ 1325
5 Conclusion ........ 1327
References ........ 1327

1. Introduction

Throughout this paper, unless otherwise specified, $G = (V, E)$ is a simple, connected, undirected graph of order $n$ with vertex set $V$ and edge set $E$. Let $A_G$ be the $n \times n$ adjacency matrix [1] of $G$. The $(i,j)$th entry of $A_G^m$ (the $m$th power of $A_G$) is the number of walks of length $m$ between the vertices $v_i$ and $v_j$ of $G$.

Definition 1.1 ([2]). The distance matrix $D = (d_{ij})$ of $G$ is defined by
$$d_{ij} = \begin{cases} d(v_i, v_j) & \text{if } i \neq j, \\ 0 & \text{if } i = j, \end{cases}$$
where $d(v_i, v_j)$ is the distance between the vertices $v_i$ and $v_j$.

Definition 1.2 ([3]). The diameter of a graph $G$ is the maximum distance between any two vertices of $G$; it is denoted by $\operatorname{Diam}(G)$.

Definition 1.3 (Hadamard product [4]). Consider the vector space $\mathbb{R}^{m \times n}$ of all $m \times n$ real matrices over the real field $\mathbb{R}$. For $P, F \in \mathbb{R}^{m \times n}$, the Hadamard product $\circ$ is the binary operation on $\mathbb{R}^{m \times n}$ defined by
$$(P \circ F)_{ij} := (P)_{ij}(F)_{ij} \quad \forall\, i, j.$$
Let $B = \{0, 1\}$ and let $B^{m \times n}$ denote the set of all binary matrices in $\mathbb{R}^{m \times n}$. Then for $P, F \in B^{m \times n}$,
$$(P \circ F)_{ij} = \begin{cases} 1 & \text{if } p_{ij} = 1 \text{ and } f_{ij} = 1, \\ 0 & \text{otherwise.} \end{cases}$$

Definition 1.4 (Frobenius inner product [5]). Consider the vector space $\mathbb{R}^{n \times n}$ over the field $\mathbb{R}$. The Frobenius inner product $\langle \cdot, \cdot \rangle_F : \mathbb{R}^{n \times n} \times \mathbb{R}^{n \times n} \to \mathbb{R}$ is defined by
$$\langle P, Q \rangle_F = \sum_{i=1}^{n} \sum_{j=1}^{n} p_{ij} q_{ij} = \sum_{i=1}^{n} \sum_{j=1}^{n} (P \circ Q)_{ij} = \operatorname{Tr}(PQ^T),$$
where $P, Q \in \mathbb{R}^{n \times n}$. The Frobenius norm $\|\cdot\|_F$ induced by this inner product on $\mathbb{R}^{n \times n}$ is $\|P\|_F = (\langle P, P \rangle_F)^{1/2}$, and the metric $d_F$ induced by this norm on $\mathbb{R}^{n \times n}$ is $d_F(P, Q) = \|P - Q\|_F$ for $P, Q \in \mathbb{R}^{n \times n}$.

Remark 1.5.
(i) $\|P\|_F = (\langle P, P \rangle_F)^{1/2} = (\operatorname{Tr}(PP^T))^{1/2}$ for $P \in \mathbb{R}^{n \times n}$.
(ii) Let $X, Y \in \mathbb{R}^{1 \times n}$. Then $\langle X, Y \rangle_F = \sum_{i=1}^{n} x_i y_i$ and
$$d_F(X, Y) = \|X - Y\|_F = \left( \sum_{i=1}^{n} (x_i - y_i)^2 \right)^{1/2},$$
which is the Euclidean distance between the vectors $X$ and $Y$ of $\mathbb{R}^n$.
(iii) For $P \in B^{n \times n}$,
$$d_F(P, 0) = \|P - 0\|_F = \|P\|_F = \left( \sum_{i=1}^{n} \sum_{j=1}^{n} p_{ij}^2 \right)^{1/2},$$
so $(d_F(P, 0))^2$ is the number of 1's in $P$. For $P, Q \in B^{n \times n}$,
$$d_F(P, Q) = \|P - Q\|_F = \left( \sum_{i=1}^{n} \sum_{j=1}^{n} (p_{ij} - q_{ij})^2 \right)^{1/2},$$
so $(d_F(P, Q))^2$ is the number of non-identical entries in $P$ and $Q$, and the number of identical entries in $P$ and $Q$ is $n^2 - (d_F(P, Q))^2$.
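These identities are easy to verify numerically. Below is a minimal numpy sketch (the helper name `frobenius_inner` and the sample matrices are our own, not from the paper) checking that the expressions for $\langle P, Q \rangle_F$ in Definition 1.4 agree, and that for binary matrices the squared Frobenius distance counts entries as in Remark 1.5(iii):

```python
import numpy as np

def frobenius_inner(P, Q):
    """Frobenius inner product <P, Q>_F = sum_ij p_ij q_ij = Tr(P Q^T)."""
    return np.trace(P @ Q.T)

P = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 0, 0]])
Q = np.array([[1, 1, 0],
              [0, 1, 0],
              [0, 0, 1]])

H = P * Q  # Hadamard product: entrywise, not the usual matrix product
assert frobenius_inner(P, Q) == H.sum()  # the expressions in Definition 1.4 agree

# Remark 1.5(iii): for a binary matrix P, d_F(P, 0)^2 counts the 1's in P
assert round(np.linalg.norm(P, 'fro') ** 2) == P.sum()

# ... and d_F(P, Q)^2 counts the entries where the binary matrices P and Q differ
assert round(np.linalg.norm(P - Q, 'fro') ** 2) == (P != Q).sum()
```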
2. Preliminaries

Definition 2.1 ([6]). The function $\delta : \mathbb{R} \to B = \{0, 1\}$ is defined by
$$\delta(a) := \begin{cases} 0 & \text{if } a \leq 0, \\ 1 & \text{otherwise,} \end{cases} \qquad a \in \mathbb{R}.$$
Also, $\delta : \mathbb{R}^{m \times n} \to B^{m \times n}$ is defined by $\delta(D) := (\delta(d_{ij}))$ for all $i, j$ and $D \in \mathbb{R}^{m \times n}$. Then $\delta(A_G^m) \in B^{n \times n}$ for all $m \in \{0, 1, 2, \dots\}$, and it is the equivalent binary matrix representation of $A_G^m$. Let $A_G^{(m)} = \delta(A_G^m)$. Then
$$(A_G^{(m)})_{ij} = \begin{cases} 1 & \text{if there exists a walk of length } m \text{ from } v_i \text{ to } v_j \text{ in } G, \\ 0 & \text{otherwise.} \end{cases}$$
Also, for $C, F \in B^{m \times n}$, $\delta(C \circ F) = \delta(C) \circ \delta(F)$.

Definition 2.2 (m-distance matrix [6]). Consider the graph $G = (V, E)$ with $n$ vertices. The $m$-distance matrix $D_m$ of $G$ is the $n \times n$ symmetric binary matrix defined by
$$(D_m)_{ij} := \begin{cases} 1 & \text{if } d(v_i, v_j) = m, \\ 0 & \text{otherwise,} \end{cases}$$
where $d(v_i, v_j)$ is the distance between the vertices $v_i$ and $v_j$.

Remark 2.3.
(i) $D_m \in B^{n \times n}$ for all $m = 0, 1, 2, \dots$
(ii) $D_0 = I_n$.
(iii) $D_0 + D_1 + \dots + D_d = J_n$, the $n \times n$ matrix of ones.
(iv) $\operatorname{Diam}(G) = \max\{m : D_m \neq 0\}$.
(v) For $0 \leq i, j \leq d$,
$$D_i \circ D_j = \begin{cases} 0 & \text{if } i \neq j, \\ D_i & \text{if } i = j; \end{cases}$$
that is, $\{D_0, D_1, \dots, D_d\}$ is an orthogonal subset of $\mathbb{R}^{n \times n}$ with respect to the binary operation $\circ$.

Theorem 2.4 ([6]). Let $A_G$ be the adjacency matrix of $G = (V, E)$ with diameter $d$. Then $b_2 = \{D_0, D_1, \dots, D_d\}$ is a linearly independent subset of the vector space $\mathbb{R}^{n \times n}$.

Theorem 2.5 ([6]). Let $A_G$ be the adjacency matrix of $G = (V, E)$ with diameter $d$. Then for $1 \leq m \leq d$,
$$D_m = \delta\!\left( A_G^{(m)} - \sum_{s=0}^{m-1} A_G^{(m)} \circ A_G^{(s)} \right),$$
where $D_m$ is the $m$-distance matrix of $G$.

3. Main Results

Theorem 3.1. Let $A_G$ be the adjacency matrix of $G = (V, E)$ and let $d = \operatorname{Diam}(G)$. Then for $0 \leq K, m \leq d$,
$$D_K \circ A_G^{(m)} = \begin{cases} D_m & \text{if } K = m, \\ 0 & \text{if } m < K. \end{cases}$$

Proof. Case (i): $K = m$. For all $i, j$,
$$(D_m \circ A_G^{(m)})_{ij} = (D_m)_{ij}(A_G^{(m)})_{ij} = \begin{cases} 1 & \text{if } (D_m)_{ij} = 1 \text{ and } (A_G^{(m)})_{ij} = 1, \\ 0 & \text{otherwise} \end{cases} = \begin{cases} 1 & \text{if } d(v_i, v_j) = m \text{ and } (A_G^{(m)})_{ij} = 1, \\ 0 & \text{otherwise.} \end{cases}$$
But $d(v_i, v_j) = m$ implies $(A_G^{(m)})_{ij} = 1$, so
$$(D_m \circ A_G^{(m)})_{ij} = \begin{cases} 1 & \text{if } d(v_i, v_j) = m, \\ 0 & \text{otherwise} \end{cases} = (D_m)_{ij}.$$
That is, $D_m \circ A_G^{(m)} = D_m$.

Case (ii): $K > m$. We have $(D_K \circ A_G^{(m)})_{ij} = (D_K)_{ij}(A_G^{(m)})_{ij}$. If $(D_K)_{ij} = 1$, then $d(v_i, v_j) = K$; that is, a shortest path between $v_i$ and $v_j$ has length $K$, so there is no walk of length less than $K$ between $v_i$ and $v_j$. Thus $(A_G^{(m)})_{ij} = 0$ (since $K > m$), and so $(D_K)_{ij}(A_G^{(m)})_{ij} = 0$. If $(D_K)_{ij} = 0$, then again $(D_K)_{ij}(A_G^{(m)})_{ij} = 0$. Therefore $(D_K \circ A_G^{(m)})_{ij} = 0$ for all $i, j$ whenever $K > m$; that is, $D_K \circ A_G^{(m)} = 0$ if $K > m$. Hence
$$D_K \circ A_G^{(m)} = \begin{cases} D_m & \text{if } K = m, \\ 0 & \text{if } m < K, \end{cases} \qquad \text{for } K \leq d. \qquad \square$$
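Theorems 2.5 and 3.1, together with Remark 2.3, can be sanity-checked numerically. Below is a short numpy sketch doing so on the path tree $P_4$ (our choice of example; `delta` implements Definition 2.1, and all variable names are ours rather than the paper's):

```python
import numpy as np

def delta(M):
    # Definition 2.1 applied entrywise: positive entries become 1, others 0
    return (M > 0).astype(int)

# Path tree P4: v1 - v2 - v3 - v4, with diameter d = 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
n, d = 4, 3

# Binary equivalents of the first d+1 powers: A^(m) = delta(A^m)
Apow = [delta(np.linalg.matrix_power(A, m)) for m in range(d + 1)]

# Theorem 2.5: D_m = delta(A^(m) - sum_{s<m} A^(m) o A^(s)); the outer delta
# clips entries that repeated shorter walks would otherwise drive negative
D = [np.eye(n, dtype=int)]                               # D_0 = I_n (Remark 2.3(ii))
for m in range(1, d + 1):
    D.append(delta(Apow[m] - sum(Apow[m] * Apow[s] for s in range(m))))

assert (sum(D) == np.ones((n, n), dtype=int)).all()      # Remark 2.3(iii): the sum is J_n
assert max(m for m in range(d + 1) if D[m].any()) == d   # Remark 2.3(iv): Diam(G) = d

# Theorem 3.1: D_K o A^(m) equals D_m when K = m, and 0 when m < K
for K in range(d + 1):
    for m in range(K + 1):
        expected = D[m] if K == m else np.zeros((n, n), dtype=int)
        assert (D[K] * Apow[m] == expected).all()
```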
Lemma 3.4. Let $G = (V, E)$ be a connected undirected graph of order $n$ ($n > 1$), and let $d_{ij}$ denote the distance between $v_i$ and $v_j$. Then there always exists at least one walk between $v_i$ and $v_j$ of length $d_{ij} + 2r$ for every $r = 0, 1, 2, \dots$; that is,
$$\left( A_G^{(d_{ij} + 2r)} \right)_{ij} = 1, \qquad \forall\, r = 0, 1, 2, \dots$$

Proof. Since $G$ is a connected undirected graph, there exists at least one path between $v_i$ and $v_j$. Let $P_{ij} : v_0(=v_i) - v_1 - v_2 - \dots - v_{d_{ij}-1} - v_{d_{ij}}(=v_j)$ denote a shortest path between $v_i$ and $v_j$, of length $d_{ij}$. If we traverse back and forth once along the last edge of $P_{ij}$, we get a walk $v_0(=v_i) - v_1 - v_2 - \dots - v_{d_{ij}-1} - v_{d_{ij}}(=v_j) - v_{d_{ij}-1} - v_{d_{ij}}(=v_j)$ of length $d_{ij} + 2$ between $v_i$ and $v_j$. Likewise, if we traverse back and forth twice along the last edge of $P_{ij}$, we get a walk $v_0(=v_i) - v_1 - v_2 - \dots - v_{d_{ij}-1} - v_{d_{ij}} - v_{d_{ij}-1} - v_{d_{ij}} - v_{d_{ij}-1} - v_{d_{ij}}(=v_j)$ of length $d_{ij} + 4$ between $v_i$ and $v_j$. Proceeding in the same way, traversing back and forth $r$ times along the last edge yields a walk of length $d_{ij} + 2r$ between $v_i$ and $v_j$, for every $r = 0, 1, 2, \dots$

Theorem 3.2.
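Lemma 3.4 admits a quick numerical spot-check as well. The self-contained sketch below (our variable names; the same path tree as in the previous example, on which $d(v_i, v_j) = |i - j|$) confirms that a walk of length $d_{ij} + 2r$ exists for the first few values of $r$:

```python
import numpy as np

def delta(M):
    # Entrywise indicator from Definition 2.1: positive entries become 1
    return (M > 0).astype(int)

# Path tree P4 again; on a path, d(v_i, v_j) = |i - j|
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
n = 4
dist = np.array([[abs(i - j) for j in range(n)] for i in range(n)])

# Lemma 3.4: (A^(d_ij + 2r))_{ij} = 1 for every r = 0, 1, 2, ...
for i in range(n):
    for j in range(n):
        for r in range(4):  # spot-check r = 0, 1, 2, 3
            L = dist[i, j] + 2 * r
            assert delta(np.linalg.matrix_power(A, L))[i, j] == 1
```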