Efficient Diagonalization of Symmetric Matrices Associated with Graphs Of
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Math 221: LINEAR ALGEBRA
Math 221: LINEAR ALGEBRA, Chapter 8: Orthogonality, §8-3 Positive Definite Matrices. Le Chen (Emory University), 2020 Fall (last updated on 11/10/2020). Creative Commons License (CC BY-NC-SA). Slides are adapted from those by Karen Seyffarth from the University of Calgary.

Topics: Positive Definite Matrices; Cholesky factorization – Square Root of a Matrix.

Positive Definite Matrices.

Definition. An n × n matrix A is positive definite if it is symmetric and has positive eigenvalues, i.e., if λ is an eigenvalue of A, then λ > 0.

Theorem. If A is a positive definite matrix, then det(A) > 0 and A is invertible.

Proof. Let λ_1, λ_2, ..., λ_n denote the (not necessarily distinct) eigenvalues of A. Since A is symmetric, A is orthogonally diagonalizable. In particular, A ∼ D, where D = diag(λ_1, λ_2, ..., λ_n). Similar matrices have the same determinant, so det(A) = det(D) = λ_1 λ_2 ··· λ_n. Since A is positive definite, λ_i > 0 for all i, 1 ≤ i ≤ n; it follows that det(A) > 0, and therefore A is invertible.

Theorem. A symmetric matrix A is positive definite if and only if x^T A x > 0 for all x ∈ R^n, x ≠ 0.

Proof. Since A is symmetric, there exists an orthogonal matrix P so that P^T A P = diag(λ_1, λ_2, ..., λ_n) = D, where λ_1, λ_2, ..., λ_n are the (not necessarily distinct) eigenvalues of A. Let x ∈ R^n, x ≠ 0, and define y = P^T x. Then x^T A x = x^T (P D P^T) x = (x^T P) D (P^T x) = (P^T x)^T D (P^T x) = y^T D y. Writing y = (y_1, y_2, ..., y_n)^T,
x^T A x = (y_1 y_2 ··· y_n) diag(λ_1, λ_2, ..., λ_n) (y_1, y_2, ..., y_n)^T.
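The two characterizations above (positive eigenvalues, and x^T A x > 0 for every nonzero x) are easy to check numerically. The following is a minimal sketch, assuming NumPy and a small symmetric test matrix chosen here for illustration; it is not part of the original slides.

```python
import numpy as np

# Example symmetric matrix (chosen for illustration; any symmetric A works here).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Characterization 1: the eigenvalues of a symmetric matrix are real;
# A is positive definite iff they are all strictly positive.
eigenvalues = np.linalg.eigvalsh(A)
print("eigenvalues:", eigenvalues)
print("positive definite (eigenvalue test):", np.all(eigenvalues > 0))

# Characterization 2: x^T A x > 0 for (a random sample of) nonzero vectors x.
rng = np.random.default_rng(0)
samples = rng.standard_normal((1000, 3))
quadratic_forms = np.einsum("ij,jk,ik->i", samples, A, samples)
print("min sampled x^T A x:", quadratic_forms.min())

# det(A) > 0 is a necessary consequence, as in the first theorem.
print("det(A):", np.linalg.det(A))

# A positive definite matrix also admits a Cholesky factorization A = L L^T.
L = np.linalg.cholesky(A)
print("Cholesky factor L:\n", L)
```

Note that np.linalg.cholesky raises an error when the matrix is not positive definite, which mirrors the "square root of a matrix" remark in the slide outline.
-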
Handout 9 More Matrix Properties; the Transpose
Handout 9: More matrix properties; the transpose.

Square matrix properties. These properties only apply to a square matrix, i.e. n × n.
• The leading diagonal is the diagonal line consisting of the entries a11, a22, a33, ..., ann.
• A diagonal matrix has zeros everywhere except the leading diagonal.
• The identity matrix I has zeros off the leading diagonal, and 1 for each entry on the diagonal. It is a special case of a diagonal matrix, and AI = IA = A for any n × n matrix A.
• An upper triangular matrix has all its non-zero entries on or above the leading diagonal.
• A lower triangular matrix has all its non-zero entries on or below the leading diagonal.
• A symmetric matrix has the same entries below and above the diagonal: aij = aji for any values of i and j between 1 and n.
• An antisymmetric or skew-symmetric matrix has the opposite entries below and above the diagonal: aij = −aji for any values of i and j between 1 and n. This automatically means the diagonal entries must all be zero.

Transpose. To transpose a matrix, we reflect it across the line given by the leading diagonal a11, a22, etc. In general the result is a different shape to the original matrix: for example, if A is the 2 × 3 matrix with rows (a11, a12, a13) and (a21, a22, a23), then A^T is the 3 × 2 matrix with rows (a11, a21), (a12, a22), (a13, a23); in general [A^T]ij = Aji.
• If A is m × n then A^T is n × m.
• The transpose of a symmetric matrix is itself: A^T = A (recalling that only square matrices can be symmetric).
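As a quick illustration of these definitions (not part of the handout), the sketch below builds a non-square matrix, takes its transpose, and tests a square matrix for symmetry and skew-symmetry; the matrices are made up for the example.

```python
import numpy as np

# A 2 x 3 matrix: its transpose is 3 x 2, and [A^T]_ij = A_ji.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
print("A shape:", A.shape, " A^T shape:", A.T.shape)
print("A^T:\n", A.T)

# Symmetry and skew-symmetry only make sense for square matrices.
S = np.array([[2.0, 7.0],
              [7.0, 5.0]])          # s_ij = s_ji
K = np.array([[0.0, 3.0],
              [-3.0, 0.0]])         # k_ij = -k_ji, so the diagonal is zero

print("S symmetric:       ", np.array_equal(S, S.T))
print("K skew-symmetric:  ", np.array_equal(K, -K.T))
print("K diagonal is zero:", np.all(np.diag(K) == 0))
```
-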
On the Eigenvalues of Euclidean Distance Matrices
Volume 27, N. 3, pp. 237–250, 2008. Copyright © 2008 SBMAC. ISSN 0101-8205. www.scielo.br/cam

On the eigenvalues of Euclidean distance matrices. A.Y. Alfakih, Department of Mathematics and Statistics, University of Windsor, Windsor, Ontario N9B 3P4, Canada. E-mail: [email protected]

Abstract. In this paper, the notion of equitable partitions (EP) is used to study the eigenvalues of Euclidean distance matrices (EDMs). In particular, EP is used to obtain the characteristic polynomials of regular EDMs and non-spherical centrally symmetric EDMs. The paper also presents methods for constructing cospectral EDMs and EDMs with exactly three distinct eigenvalues.

Mathematical subject classification: 51K05, 15A18, 05C50. Key words: Euclidean distance matrices, eigenvalues, equitable partitions, characteristic polynomial.

1 Introduction. An n × n nonzero matrix D = (d_ij) is called a Euclidean distance matrix (EDM) if there exist points p^1, p^2, ..., p^n in some Euclidean space R^r such that d_ij = ||p^i − p^j||^2 for all i, j = 1, ..., n, where || || denotes the Euclidean norm. Let p^i, i ∈ N = {1, 2, ..., n}, be the set of points that generate an EDM D. An m-partition of D is an ordered sequence π = (N_1, N_2, ..., N_m) of nonempty disjoint subsets of N whose union is N. The subsets N_1, ..., N_m are called the cells of the partition. The n-partition of D where each cell consists

#760/08. Received: 07/IV/08. Accepted: 17/VI/08. Research supported by the Natural Sciences and Engineering Research Council of Canada and MITACS.
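To make the definition concrete (this example is mine, not the paper's), the sketch below generates points, forms the squared-distance matrix D, and looks at its spectrum. The observation that a nonzero EDM has exactly one positive eigenvalue is a standard fact about EDMs, not a claim taken from this excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)

# n points in R^r, used to generate an EDM as in the paper's definition:
# d_ij = ||p^i - p^j||^2.
n, r = 6, 3
P = rng.standard_normal((n, r))           # row i is the point p^i

diff = P[:, None, :] - P[None, :, :]      # pairwise differences p^i - p^j
D = np.sum(diff**2, axis=-1)              # squared Euclidean distances

print("D symmetric:", np.allclose(D, D.T), " zero diagonal:", np.allclose(np.diag(D), 0))

# Spectrum of D: the eigenvalues sum to trace(D) = 0, and a nonzero EDM
# has exactly one positive eigenvalue (a standard fact, used implicitly
# in eigenvalue studies of EDMs).
eigenvalues = np.sort(np.linalg.eigvalsh(D))
print("eigenvalues of D:", eigenvalues)
print("number of positive eigenvalues:", int(np.sum(eigenvalues > 1e-9)))
```
-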
Chapter 0 Preliminaries
Chapter 0: Preliminaries.

Objective of the course: introduce basic matrix results and techniques that are useful in theory and applications. It is important to know many results, but there is no way I can tell you all of them. I would like to introduce techniques so that you can obtain the results you want. This is the standard teaching philosophy of "It is better to teach students how to fish instead of just giving them fishes."

Assumption. You are familiar with:
• Linear equations, solution sets, elementary row operations.
• Matrices, column space, row space, null space, ranks.
• Determinant, eigenvalues, eigenvectors, diagonal form.
• Vector spaces, basis, change of bases.
• Linear transformations, range space, kernel.
• Inner product, Gram–Schmidt process for real matrices.
• Basic operations of complex numbers.
• Block matrices.

Notation.
• M_n(F), M_{m,n}(F) are the sets of n × n and m × n matrices over F = R or C (or a more general field).
• F^n is the set of column vectors of length n with entries in F.
• Sometimes we write M_n, M_{m,n} if F = C.
• If A = [A1 0; 0 A2] ∈ M_{m+n}(F) with A1 ∈ M_m(F) and A2 ∈ M_n(F), we write A = A1 ⊕ A2.
• x^t, A^t denote the transpose of a vector x and a matrix A.
• For a complex matrix A, Ā denotes the matrix obtained from A by replacing each entry by its complex conjugate. Furthermore, A* = (Ā)^t.

More results and examples on block matrix multiplications.

1 Similarity, Jordan form, and applications. 1.1 Similarity. Definition 1.1.1. Two matrices A, B ∈ M_n are similar if there is an invertible S ∈ M_n such that S^{-1}AS = B; equivalently, A = T^{-1}BT with T = S^{-1}.
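The direct-sum notation and the similarity definition are easy to see numerically. The following sketch is my own illustration, using small made-up matrices: it builds A1 ⊕ A2 as a block-diagonal matrix and checks that S^{-1}AS and A share their eigenvalues (and hence determinant and trace), a standard consequence of similarity.

```python
import numpy as np

# Direct sum A = A1 (+) A2: the block-diagonal matrix [A1 0; 0 A2].
A1 = np.array([[1.0, 2.0],
               [0.0, 3.0]])
A2 = np.array([[4.0]])
A = np.block([[A1, np.zeros((2, 1))],
              [np.zeros((1, 2)), A2]])
print("A = A1 (+) A2:\n", A)

# Similarity: B = S^{-1} A S for an invertible S.
rng = np.random.default_rng(2)
S = rng.standard_normal((3, 3))
B = np.linalg.inv(S) @ A @ S

# Similar matrices share their eigenvalues.
print("eigenvalues of A:", np.sort(np.linalg.eigvals(A).real))
print("eigenvalues of B:", np.sort(np.linalg.eigvals(B).real))
```
-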
Rotation Matrix - Wikipedia, the free encyclopedia
Rotation matrix. From Wikipedia, the free encyclopedia.

In linear algebra, a rotation matrix is a matrix that is used to perform a rotation in Euclidean space. For example the matrix
R = [cos θ  −sin θ; sin θ  cos θ]
rotates points in the xy-Cartesian plane counterclockwise through an angle θ about the origin of the Cartesian coordinate system. To perform the rotation, the position of each point must be represented by a column vector v, containing the coordinates of the point. A rotated vector is obtained by using the matrix multiplication Rv (see below for details).

In two and three dimensions, rotation matrices are among the simplest algebraic descriptions of rotations, and are used extensively for computations in geometry, physics, and computer graphics. Though most applications involve rotations in two or three dimensions, rotation matrices can be defined for n-dimensional space.

Rotation matrices are always square, with real entries. Algebraically, a rotation matrix in n dimensions is an n × n special orthogonal matrix, i.e. an orthogonal matrix whose determinant is 1: R^T R = R R^T = I and det R = 1. The set of all rotation matrices forms a group, known as the rotation group or the special orthogonal group. It is a subset of the orthogonal group, which includes reflections and consists of all orthogonal matrices with determinant 1 or −1, and of the special linear group, which includes all volume-preserving transformations and consists of matrices with determinant 1.

Contents: 1 Rotations in two dimensions; 1.1 Non-standard orientation
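A small sketch of the 2D case described above (my own example, not code from the article): build R(θ), rotate a point by the matrix multiplication Rv, and check the special-orthogonal properties.

```python
import numpy as np

def rotation_matrix_2d(theta: float) -> np.ndarray:
    """Counterclockwise rotation by angle theta (radians) about the origin."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

theta = np.pi / 2                      # 90 degrees
R = rotation_matrix_2d(theta)

v = np.array([1.0, 0.0])               # a point on the x-axis, as a column vector
print("R v =", R @ v)                  # expected: approximately (0, 1)

# Special orthogonal: R^T R = I and det R = 1.
print("R^T R = I:", np.allclose(R.T @ R, np.eye(2)))
print("det R   =", np.linalg.det(R))
```
-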
Publications of Roger Horn
Linear Algebra and its Applications 424 (2007) 3–7. www.elsevier.com/locate/laa

Publications of Roger Horn
1. A heuristic asymptotic formula concerning the distribution of prime numbers (with P.T. Bateman), Math. Comp. 16 (1962), 363–367 (MR26#6139).
2. Primes represented by irreducible polynomials in one variable, Theory of Numbers (with P.T. Bateman), In: Proceedings of Symposia in Pure Mathematics, vol. VIII, American Mathematical Society, Providence, Rhode Island, 1965, pp. 119–132 (MR31#1234).
3. On boundary values of a Schlicht mapping, Proc. Amer. Math. Soc. 18 (1967) 782–787 (MR36#2792).
4. On infinitely divisible matrices, kernels, and functions, Z. Wahrscheinlichkeitstheorie verw. Gebiete 8 (1967) 219–230 (MR36#925).
5. The theory of infinitely divisible matrices and kernels, Trans. Amer. Math. Soc. 136 (1969) 269–286 (MR41#9327).
6. Infinitely divisible positive definite sequences, Trans. Amer. Math. Soc. 136 (1969) 287–303 (MR58#17681).
7. On a functional equation arising in probability (with R.D. Meredith), Amer. Math. Monthly 76 (1969) 802–804 (MR40#4622).
8. On the Wronskian test for independence, Amer. Math. Monthly 77 (1970) 65–66.
9. On certain power series and sequences, J. London Math. Soc. 2 (2) (1970) 160–162 (MR41#5629).
10. On moment sequences and renewal sequences, J. Math. Anal. Appl. 31 (1970) 130–135 (MR42#6541).
11. On Fenchel's theorem, Amer. Math. Monthly 78 (1971) 380–381 (MR44#2142).
12. Schlicht mappings and infinitely divisible kernels, Pacific J.
-
Matrices and Matrix Operations with TI-Nspire™ CAS
Matrices and Matrix Operations with TI-Nspire™ CAS. Forest W. Arnold, June 2020. Typeset in LaTeX. Copyright © 2020 Forest W. Arnold.

This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/legalcode/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA. You can use, print, duplicate, and share this work as much as you want. You can base your own work on it and reuse parts if you keep the license the same.

Trademarks: TI-Nspire is a registered trademark of Texas Instruments, Inc.

Attribution: Most of the examples in this article are from A First Course in Linear Algebra, an open text by Lyryx Learning (base textbook version 2017, revision A, by K. Kuttler). The text is licensed under the Creative Commons License (CC BY) and is available for download at the link https://lyryx.com/first-course-linear-algebra/.

1 Introduction. The article "Systems of Equations with TI-Nspire™ CAS: Matrices and Gaussian Elimination" described how to represent and solve systems of linear equations with matrices and elementary row operations. By defining arithmetic and algebraic operations with matrices, applications of matrices can be expanded to include more than simply solving linear systems. This article describes and demonstrates how to use TI-Nspire's built-in matrix functions to
• add and subtract matrices,
• multiply matrices by scalars,
• multiply matrices,
• transpose matrices,
• find matrix inverses, and
• use matrix equations.
The TI-Nspire examples in this article require the CAS version of TI-Nspire.
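For readers without the calculator, the operations listed above have direct equivalents in any matrix library; the sketch below shows them in NumPy (my own illustration, not TI-Nspire syntax), using small made-up matrices.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
b = np.array([5.0, 6.0])

print("A + B =\n", A + B)             # addition
print("A - B =\n", A - B)             # subtraction
print("3 * A =\n", 3 * A)             # scalar multiplication
print("A @ B =\n", A @ B)             # matrix multiplication
print("A^T   =\n", A.T)               # transpose
print("A^-1  =\n", np.linalg.inv(A))  # inverse (A is invertible: det = -2)

# Matrix equation A x = b, solved without forming the inverse explicitly.
x = np.linalg.solve(A, b)
print("solution of A x = b:", x)
```
-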
Linear Algebra (XXVIII)
Linear Algebra (XXVIII). Yijia Chen.

1. Quadratic Forms

Definition 1.1. Let A be an n × n symmetric matrix over K, i.e., A = (a_ij)_{i,j ∈ [n]} where a_ij = a_ji for all i, j ∈ [n]. Then for the variable vector x = (x_1, ..., x_n)^T,
x'Ax = (x_1 ··· x_n) [a_11 ··· a_1n; ... ; a_n1 ··· a_nn] (x_1, ..., x_n)^T = Σ_{i,j ∈ [n]} a_ij · x_i · x_j = Σ_{i ∈ [n]} a_ii · x_i^2 + 2 Σ_{1 ≤ i < j ≤ n} a_ij · x_i · x_j
is a quadratic form.

Definition 1.2. A quadratic form x'Ax is diagonal if the matrix A is diagonal.

Definition 1.3. Let A and B be two n × n matrices. If there exists an invertible C with B = C'AC, then B is congruent to A.

Lemma 1.4. Matrix congruence is an equivalence relation. More precisely, for all n × n matrices A, B, and C: (i) A is congruent to A; (ii) if A is congruent to B, then B is congruent to A as well; and (iii) if A is congruent to B and B is congruent to C, then A is congruent to C.

Lemma 1.5. Let A and B be two n × n matrices. If B is obtained from A by the following elementary operations, then B is congruent to A. (i) Switch the i-th and j-th rows, then switch the i-th and j-th columns, where 1 ≤ i < j ≤ n. (ii) Multiply the i-th row by k, then multiply the i-th column by k, where i ∈ [n] and k ∈ K \ {0}.
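As a numerical companion to Definition 1.1 and Lemma 1.5 (my own example, not taken from the notes): evaluate x'Ax both as a matrix product and as the double sum, and check that applying a paired row/column operation produces a congruent matrix C'AC.

```python
import numpy as np

# A symmetric matrix and a point at which to evaluate the quadratic form.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 4.0, 5.0]])
x = np.array([1.0, -2.0, 3.0])

# x'Ax as a matrix product ...
q_matrix = x @ A @ x
# ... and as sum_i a_ii x_i^2 + 2 sum_{i<j} a_ij x_i x_j.
n = len(x)
q_sum = sum(A[i, i] * x[i] ** 2 for i in range(n)) + 2 * sum(
    A[i, j] * x[i] * x[j] for i in range(n) for j in range(i + 1, n)
)
print(q_matrix, q_sum)                  # the two values agree

# Congruence via a paired elementary operation (Lemma 1.5(i)):
# switching rows i, j and then columns i, j equals C'AC for a permutation C.
C = np.eye(3)
C[:, [0, 1]] = C[:, [1, 0]]             # swap columns 0 and 1 of the identity
B = C.T @ A @ C
print(B)                                 # rows 0,1 and columns 0,1 of A swapped
```
-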
SYMPLECTIC GROUPS, MATHEMATICAL SURVEYS Number 16
SYMPLECTIC GROUPS. Mathematical Surveys, Number 16. By O. T. O'Meara. 1978, American Mathematical Society, Providence, Rhode Island. Dedication: "To Jean - my wild Irish rose."

Library of Congress Cataloging in Publication Data: O'Meara, O. Timothy, 1928–. Symplectic groups. (Mathematical surveys; no. 16). Based on lectures given at the University of Notre Dame, 1974–1975. Bibliography: p. Includes index. 1. Symplectic groups. 2. Isomorphisms (Mathematics). I. Title. II. Series: American Mathematical Society. Mathematical surveys; no. 16. QA171.046 512'.2 78-19101. ISBN 0-8218-1516-4. AMS (MOS) subject classifications (1970): Primary 15-02, 15A33, 15A36, 20-02, 20B25, 20045, 20F55, 20H05, 20H20, 20H25; Secondary 10C30, 13-02, 20F35, 20G15, 50025, 50030.

Copyright © 1978 by the American Mathematical Society. Printed in the United States of America. All rights reserved except those granted to the United States Government. Otherwise, this book, or parts thereof, may not be reproduced in any form without permission of the publishers.

CONTENTS: Preface (ix); Prerequisites and Notation (xi); Chapter 1. Introduction: 1.1 Alternating Spaces (1); 1.2 Projective Transformations
-
Real Symmetric Matrices
Chapter 2: Real Symmetric Matrices.

2.1 Special properties of real symmetric matrices. A matrix A ∈ M_n(C) (or M_n(R)) is diagonalizable if it is similar to a diagonal matrix. If this happens, it means that there is a basis of C^n with respect to which the linear transformation of C^n defined by left multiplication by A has a diagonal matrix. Every element of such a basis is simply multiplied by a scalar when it is multiplied by A, which means exactly that the basis consists of eigenvectors of A.

Lemma 2.1.1. A matrix A ∈ M_n(C) is diagonalizable if and only if C^n possesses a basis consisting of eigenvectors of A.

Not all matrices are diagonalizable. For example A = [1 1; 0 1] is not. To see this, note that 1 (occurring twice) is the only eigenvalue of A, but that all eigenvectors of A are scalar multiples of (1, 0)^T, so C^2 (or R^2) does not contain a basis consisting of eigenvectors of A, and A is not similar to a diagonal matrix.

We note that a matrix can fail to be diagonalizable only if it has repeated eigenvalues, as the following lemma shows.

Lemma 2.1.2. Let A ∈ M_n(R) and suppose that A has distinct eigenvalues λ_1, ..., λ_n in C. Then A is diagonalizable.

Proof. Let v_i be an eigenvector of A corresponding to λ_i. We will show that S = {v_1, ..., v_n} is a linearly independent set. Since v_1 is not the zero vector, we know that {v_1} is linearly independent. If S is linearly dependent, let k be the least index for which {v_1, ..., v_k} is linearly dependent.
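A quick numerical companion to the example above (my own sketch): for A = [1 1; 0 1] the eigenvalue 1 has only a one-dimensional eigenspace, so no eigenvector basis exists, while a matrix with distinct eigenvalues diagonalizes without trouble, as in Lemma 2.1.2.

```python
import numpy as np

# The non-diagonalizable example from the text: eigenvalue 1 occurs twice,
# but its eigenspace is only one-dimensional (spanned by (1, 0)^T).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print("eigenvalues:", eigenvalues)
rank_of_A_minus_I = np.linalg.matrix_rank(A - np.eye(2))
print("dim of eigenspace for eigenvalue 1:", 2 - rank_of_A_minus_I)

# A matrix with distinct eigenvalues is diagonalizable:
# the matrix of eigenvectors P is invertible and P^{-1} B P is diagonal.
B = np.array([[2.0, 1.0],
              [0.0, 5.0]])
vals, P = np.linalg.eig(B)
D = np.linalg.inv(P) @ B @ P
print("P^{-1} B P =\n", np.round(D, 10))   # diagonal with entries 2 and 5
```
-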
Notes on Symmetric Matrices 1 Symmetric Matrices
CPSC 536N: Randomized Algorithms, 2011-12 Term 2. Notes on Symmetric Matrices. Prof. Nick Harvey, University of British Columbia.

1 Symmetric Matrices. We review some basic results concerning symmetric matrices. All matrices that we discuss are over the real numbers. Let A be a real, symmetric matrix of size d × d and let I denote the d × d identity matrix. Perhaps the most important and useful property of symmetric matrices is that their eigenvalues behave very nicely.

Definition 1. Let U be a d × d matrix. The matrix U is called an orthogonal matrix if U^T U = I. This implies that U U^T = I, by uniqueness of inverses.

Fact 2 (Spectral Theorem). Let A be any d × d symmetric matrix. There exists an orthogonal matrix U and a (real) diagonal matrix D such that A = U D U^T. This is called a spectral decomposition of A. Let u_i be the i-th column of U and let λ_i denote the i-th diagonal entry of D. Then {u_1, ..., u_d} is an orthonormal basis consisting of eigenvectors of A, and λ_i is the eigenvalue corresponding to u_i. We can also write
A = Σ_{i=1}^{d} λ_i u_i u_i^T.   (1)
The eigenvalues λ_i are uniquely determined by A, up to reordering.

Caution. The product of two symmetric matrices is usually not symmetric.

1.1 Positive semi-definite matrices. Definition 3. Let A be any d × d symmetric matrix. The matrix A is called positive semi-definite if all of its eigenvalues are non-negative. This is denoted A ⪰ 0, where here 0 denotes the zero matrix.
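The spectral decomposition in Fact 2 and the rank-one expansion (1) can be reproduced directly with an eigensolver; the following sketch is mine, using an arbitrary symmetric test matrix, and checks both A = U D U^T and A = Σ λ_i u_i u_i^T.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                      # an arbitrary real symmetric matrix

# eigh is intended for symmetric matrices: real eigenvalues, orthonormal U.
eigenvalues, U = np.linalg.eigh(A)
D = np.diag(eigenvalues)

print("U orthogonal (U^T U = I):", np.allclose(U.T @ U, np.eye(4)))
print("A = U D U^T:", np.allclose(A, U @ D @ U.T))

# The rank-one expansion A = sum_i lambda_i u_i u_i^T, equation (1).
A_sum = sum(lam * np.outer(U[:, i], U[:, i]) for i, lam in enumerate(eigenvalues))
print("A = sum lambda_i u_i u_i^T:", np.allclose(A, A_sum))

# Positive semi-definite check (Definition 3): all eigenvalues non-negative.
print("A is PSD:", np.all(eigenvalues >= 0))
```
-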
On the Uniqueness of Euclidean Distance Matrix Completions Abdo Y
Linear Algebra and its Applications 370 (2003) 1–14. www.elsevier.com/locate/laa

On the uniqueness of Euclidean distance matrix completions. Abdo Y. Alfakih, Bell Laboratories, Lucent Technologies, Room 3J-310, 101 Crawfords Corner Road, Holmdel, NJ 07733-3030, USA. Received 28 December 2001; accepted 18 December 2002. Submitted by R.A. Brualdi.

Abstract. The Euclidean distance matrix completion problem (EDMCP) is the problem of determining whether or not a given partial matrix can be completed into a Euclidean distance matrix. In this paper, we present necessary and sufficient conditions for a given solution of the EDMCP to be unique. © 2003 Elsevier Inc. All rights reserved.

AMS classification: 51K05; 15A48; 52A05; 52B35. Keywords: Matrix completion problems; Euclidean distance matrices; Semidefinite matrices; Convex sets; Singleton sets; Gale transform.

1. Introduction. All matrices considered in this paper are real. An n × n matrix D = (d_ij) is said to be a Euclidean distance matrix (EDM) iff there exist points p^1, p^2, ..., p^n in some Euclidean space such that d_ij = ||p^i − p^j||^2 for all i, j = 1, ..., n. It immediately follows that if D is an EDM then D is symmetric with positive entries and with zero diagonal. We say a matrix A = (a_ij) is symmetric partial if only some of its entries are specified and a_ji is specified and equal to a_ij whenever a_ij is specified. The unspecified entries of A are said to be free. Given an n × n symmetric partial matrix A, an n × n matrix D is said to be an EDM completion of A iff D is an EDM and

Email address: [email protected] (A.Y. Alfakih).
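To illustrate the completion problem concretely (this is my own sketch, not the paper's method), the code below takes a symmetric partial matrix with one free off-diagonal entry and scans candidate values, testing each candidate completion with the classical characterization that a symmetric matrix D with zero diagonal is an EDM iff −(1/2) J D J is positive semidefinite, where J = I − (1/n)ee^T is the centering matrix. That test is a standard fact about EDMs, not something stated in this excerpt.

```python
import numpy as np

def is_edm(D: np.ndarray, tol: float = 1e-9) -> bool:
    """Classical test: symmetric, zero diagonal, and -J D J / 2 positive semidefinite."""
    n = D.shape[0]
    if not (np.allclose(D, D.T) and np.allclose(np.diag(D), 0)):
        return False
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    G = -0.5 * J @ D @ J                          # Gram matrix of centered points
    return bool(np.all(np.linalg.eigvalsh(G) >= -tol))

# A symmetric partial matrix: the squared distance between the second and
# fourth points is free (stored at indices [1, 3] and [3, 1]).  The specified
# entries come from the collinear points 0, 1, 3, 4 on the real line.
D_partial = np.array([[0.0, 1.0, 9.0, 16.0],
                      [1.0, 0.0, 4.0, np.nan],
                      [9.0, 4.0, 0.0, 1.0],
                      [16.0, np.nan, 1.0, 0.0]])

# Scan candidate values for the free entry and record which give an EDM completion.
candidates = np.linspace(0.0, 20.0, 201)
feasible = []
for value in candidates:
    D = D_partial.copy()
    D[1, 3] = D[3, 1] = value
    if is_edm(D):
        feasible.append(value)

print("feasible values for the free entry:", np.round(feasible, 2))
# For these collinear points the only completion is (4 - 1)^2 = 9, so the
# feasible set found by the scan should be (approximately) the single value 9.0.
```

A single feasible value is exactly the "unique completion" situation the paper's abstract is about; with points in general position one would typically see a whole interval of feasible values instead.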