Eigenvalues


MATLAB PCG algorithm:

function x = pcg(Afun,b,tol,maxit,Binvfun,x0)
x = x0; r = b - feval(Afun,x); rho = 1;
for i = 1 : maxit
   y = feval(Binvfun,r);                 % apply preconditioner B^{-1}
   rho1 = rho; rho = r' * y;
   if (i == 1)
      p = y;
   else
      beta = rho / rho1; p = y + beta * p;
   end
   q = feval(Afun,p);                    % apply system matrix A
   alpha = rho / (p' * q);
   x = x + alpha * p;
   if (norm(b - feval(Afun,x)) <= tol*norm(b)), return; end
   r = r - alpha * q;
end

Dubious termination criterion!

Summary: Advantages of Krylov methods vs. direct elimination (IF they converge at all/sufficiently fast):
• They require the system matrix A in procedural form y = evalA(x) ↔ y = Ax only.
• They can perfectly exploit the sparsity of the system matrix.
• They can cash in on low accuracy requirements (IF a viable termination criterion is available).
• They can benefit from a good initial guess.

4.4 Essential Skills Learned in Chapter 4

You should know:
• the relation between a linear system of equations with an s.p.d. matrix and a quadratic minimization problem
• how the steepest descent method works and its convergence properties
• the idea behind the conjugate gradient method and its convergence properties
• how to use the MATLAB function pcg
• the importance of the preconditioner

5 Eigenvalues

Example 5.0.1 (Analytic solution of homogeneous linear ordinary differential equations). → [47, Remark 5.6.1]

Autonomous homogeneous linear ordinary differential equation (ODE):

   ẏ = Ay ,  A ∈ C^{n,n} .   (5.0.1)

If A = S diag(λ_1, ..., λ_n) S^{-1} with S ∈ C^{n,n} regular, then with z := S^{-1}y and D := diag(λ_1, ..., λ_n):

   ẏ = Ay  ←→  ż = Dz .

➣ Solution of the initial value problem:

   ẏ = Ay ,  y(0) = y_0 ∈ C^n   ⇒   y(t) = Sz(t) ,  ż = Dz ,  z(0) = S^{-1}y_0 .

The initial value problem for the decoupled homogeneous linear ODE ż = Dz has a simple analytic solution

   z_i(t) = exp(λ_i t) (z_0)_i = exp(λ_i t) ((S^{-1})_{i,:} y_0) .

In light of Rem. ??:

   A = S diag(λ_1, ..., λ_n) S^{-1}   ⇔   A (S)_{:,i} = λ_i (S)_{:,i} ,  i = 1, ..., n .   (5.0.2)

In order to find the transformation matrix S, all non-zero solution vectors (= eigenvectors) x ∈ C^n of the linear eigenvalue problem

   Ax = λx

have to be found.
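Example 5.0.1 translates almost verbatim into MATLAB. The following lines are only a sketch (not from the notes): the function name odesol_diag and the test matrix in the example call are hypothetical, and A is assumed to be diagonalizable, cf. (5.0.2).

% Sketch (not from the lecture notes): analytic solution of y' = A*y, y(0) = y0,
% at time t via diagonalization A = S*D*S^{-1}, cf. (5.0.2).
% Assumes A is diagonalizable; function and variable names are hypothetical.
function y = odesol_diag(A, y0, t)
[S, D] = eig(A);               % columns of S = eigenvectors, D = diag(lambda_1,...,lambda_n)
z0 = S \ y0;                   % decoupled initial values z(0) = S^{-1}*y0
z  = exp(diag(D)*t) .* z0;     % z_i(t) = exp(lambda_i*t)*(z0)_i
y  = S * z;                    % back-transformation y(t) = S*z(t)
end
% example call: y = odesol_diag([0 1; -2 -3], [1; 0], 0.5)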
Example 5.0.2 (Normal mode analysis). → [?]

In the computation of the IR spectrum of a molecule one is interested in the vibrational frequencies of the molecule, which is described by n positional degrees of freedom x ∈ R^n and corresponding velocities ẋ = dx/dt ∈ R^n. Suppose all masses are equal, with an effective mass m, so that the kinetic energy is K(x, ẋ) = (m/2) ‖ẋ‖_2^2 . The total potential energy is modelled by U(x).

1. Find a local minimum x* of the potential energy:

   DU(x*) = 0 ,  H := D^2 U(x*) symmetric positive semi-definite,

hence, by Taylor expansion near the local minimum,

   U(x* + a) = U(x*) + ½ aᵀ H a .

2. Newtonian mechanics near the local minimum:

   m ẍ = m ä = −D_a U(x* + a) .

Since we are near the minimum:

   m ä = −D_a ( U(x*) + ½ aᵀ H a ) = −H a .

3. As H is real and symmetric, its eigenvectors w^j, j = 1, ..., n, are orthogonal and hence form a convenient basis for representing any vector:

   a = c_1(t) w^1 + ... + c_n(t) w^n .

Inserting into the system of second-order ODEs, we get

   m ( c̈_1 w^1 + ... + c̈_n w^n ) = −( c_1 H w^1 + ... + c_n H w^n ) = −( c_1 λ_1 w^1 + ... + c_n λ_n w^n ) ,

where we denoted the associated eigenvalues by λ_j:  H w^j = λ_j w^j .

4. Taking the scalar product with the eigenvector w^k we obtain an uncoupled set of equations for each c_k(t), k = 1, ..., n:

   m c̈_k = −λ_k c_k .

5. Looking for solutions of the form c_k = α_k sin(ω_k t) and substituting into the differential equation, one gets the angular vibrational frequency of the normal mode

   ω_k = √(λ_k / m) .

6. In case of different masses we end up with the system

   M ä = −H a

with the mass matrix M, which is symmetric, positive definite, but not necessarily diagonal. We thus must perform normal mode analysis on the matrix M^{-1}H:

   M^{-1} H w^j = λ_j w^j   ⇔   H w^j = λ_j M w^j .

5.1 Theory of eigenvalue problems

Definition 5.1.1 (Eigenvalues and eigenvectors).
• λ ∈ C eigenvalue (ger.: Eigenwert) of A ∈ K^{n,n}  :⇔  det(λI − A) = 0   (characteristic polynomial χ(λ))
• spectrum of A ∈ K^{n,n}:  σ(A) := {λ ∈ C : λ eigenvalue of A}
• eigenspace (ger.: Eigenraum) associated with eigenvalue λ ∈ σ(A):  Eig_A(λ) := Ker(λI − A)

Two simple facts:

   λ ∈ σ(A)  ⇒  dim Eig_A(λ) > 0 ,   (5.1.1)
   det(A) = det(Aᵀ)  ∀A ∈ K^{n,n}  ⇒  σ(A) = σ(Aᵀ) .   (5.1.2)

✎ notation:  ρ(A) := max{|λ| : λ ∈ σ(A)}  =ˆ spectral radius of A ∈ K^{n,n}

Theorem 5.1.2 (Bound for spectral radius).
For any matrix norm ‖·‖ induced by a vector norm (→ Def. 2.4.2):  ρ(A) ≤ ‖A‖ .

Lemma 5.1.3 (Gershgorin circle theorem). For any A ∈ K^{n,n} holds true

   σ(A) ⊂ ⋃_{j=1}^{n} { z ∈ C : |z − a_jj| ≤ Σ_{i≠j} |a_ji| } .

Lemma 5.1.4 (Similarity and spectrum).
The spectrum of a matrix is invariant with respect to similarity transformations:

   ∀A ∈ K^{n,n}:  σ(S^{-1}AS) = σ(A)  ∀ regular S ∈ K^{n,n} .

Lemma 5.1.5 (Existence of a one-dimensional invariant subspace).

   ∀C ∈ C^{n,n}: ∃u ∈ C^n:  C (Span{u}) ⊂ Span{u} .

Theorem 5.1.6 (Schur normal form).

   ∀A ∈ K^{n,n}: ∃U ∈ C^{n,n} unitary:  U^H A U = T  with T ∈ C^{n,n} upper triangular.

Corollary 5.1.7 (Principal axis transformation).

   A ∈ K^{n,n}, A A^H = A^H A:  ∃U ∈ C^{n,n} unitary:  U^H A U = diag(λ_1, ..., λ_n) ,  λ_i ∈ C .

A matrix A ∈ K^{n,n} with A A^H = A^H A is called normal.

Examples of normal matrices are
• Hermitian matrices:  A^H = A  ➤  σ(A) ⊂ R
• unitary matrices:  A^H = A^{-1}  ➤  |σ(A)| = 1
• skew-Hermitian matrices:  A = −A^H  ➤  σ(A) ⊂ iR

➤ Normal matrices can be diagonalized by unitary similarity transformations.
Symmetric real matrices can be diagonalized by orthogonal similarity transformations.

In Cor. 5.1.7:
– λ_1, ..., λ_n = eigenvalues of A
– columns of U = orthonormal basis of eigenvectors of A

Eigenvalue problems (EVPs):
➊ Given A ∈ K^{n,n}, find all eigenvalues (= spectrum of A).
➋ Given A ∈ K^{n,n}, find σ(A) plus all eigenvectors.
➌ Given A ∈ K^{n,n}, find a few eigenvalues and associated eigenvectors.

(Linear) generalized eigenvalue problem: given A ∈ C^{n,n} and regular B ∈ C^{n,n}, seek x ≠ 0, λ ∈ C with

   Ax = λBx   ⇔   B^{-1}Ax = λx .   (5.1.3)

x =ˆ generalized eigenvector, λ =ˆ generalized eigenvalue

Obviously every generalized eigenvalue problem is equivalent to a standard eigenvalue problem via Ax = λBx ⇔ B^{-1}Ax = λx. However, usually it is not advisable to use this equivalence for numerical purposes!

Remark 5.1.1 (Generalized eigenvalue problems and Cholesky factorization).
If B = B^H is s.p.d. (→ Def. 2.6.2) with Cholesky factorization B = R^H R, then

   Ax = λBx   ⇔   Ãy = λy ,  where  Ã := R^{-H} A R^{-1} ,  y := Rx .

➞ This transformation can be used for efficient computations.   △
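Remark 5.1.1 is easy to try out in MATLAB, for example for a generalized EVP of the kind arising in the normal mode analysis above (H w = λ M w with s.p.d. M). The following lines are only a sketch; the test matrices and variable names are hypothetical choices, not from the notes.

% Sketch (not from the notes): reduce the generalized EVP A*x = lambda*B*x with
% s.p.d. B to a standard EVP via the Cholesky factorization B = R'*R (Rem. 5.1.1).
n = 4;
A = gallery('minij', n);                      % symmetric test matrix (hypothetical choice)
B = full(gallery('tridiag', n, -1, 3, -1));   % s.p.d. "mass matrix" (hypothetical choice)
R = chol(B);                                  % B = R'*R, R upper triangular
Atilde = (R' \ A) / R;                        % Atilde = R^{-H} * A * R^{-1}
[W, D] = eig(Atilde);                         % standard EVP for Atilde
V = R \ W;                                    % generalized eigenvectors x = R^{-1}*y
norm(A*V - B*V*D)                             % check: A*V = B*V*D, cf. d = eig(A,B)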
5.2 "Direct" Eigensolvers

Purpose: solution of eigenvalue problems ➊, ➋ for dense matrices "up to machine precision"

MATLAB-function: eig

d = eig(A)       : computes spectrum σ(A) = {d_1, ..., d_n} of A ∈ C^{n,n}
[V,D] = eig(A)   : computes V ∈ C^{n,n}, diagonal D ∈ C^{n,n} such that AV = VD
d = eig(A,B)     : computes all generalized eigenvalues
[V,D] = eig(A,B) : computes V ∈ C^{n,n}, diagonal D ∈ C^{n,n} such that AV = BVD

Note: (Generalized) eigenvectors can be recovered as columns of V:

   AV = VD  ⇔  A (V)_{:,i} = (D)_{i,i} (V)_{:,i} ,  if D = diag(d_1, ..., d_n) .

Remark 5.2.1 (QR-Algorithm). → [20, Sect. 7.5]

Note: All "direct" eigensolvers are iterative methods.

Idea: iteration based on successive unitary similarity transformations

   A = A^(0) −→ A^(1) −→ ... −→ { diagonal matrix, if A = A^H ; upper triangular matrix, else }   (→ Thm. 5.1.6)

(superior stability of unitary transformations, see Rem. ??)

Code 5.2.2: QR-algorithm with shift

function d = eigqr(A, tol)
n = size(A,1);
while (norm(tril(A,-1)) > tol*norm(A))
   shift = A(n,n);
   [Q,R] = qr(A - shift*eye(n));
   A = Q'*A*Q;
end
d = diag(A);

Convergence of the QR-algorithm with shift: in general quadratic convergence, cubic convergence for normal matrices.

Computational cost: O(n^3) operations per step of the QR-algorithm.

Library implementations of the QR-algorithm provide numerically stable eigensolvers (→ Def. ??).   △

Remark 5.2.3 (Unitary similarity transformation to tridiagonal form).

Successive Householder similarity transformations of A = A^H annihilate the entries below the first subdiagonal column by column (➞ =ˆ affected rows/columns, =ˆ targeted vector; diagram of the fill-in pattern omitted here).

➣ transformation to tridiagonal form! (For general matrices a similar strategy achieves a similarity transformation to upper Hessenberg form.)

This transformation is used as a preprocessing step for the QR-algorithm ➣ eig.   △

Remark 5.2.4 (Computational effort for eigenvalue computations).

Computational effort (#elementary operations) for eig():

   eigenvalues & eigenvectors of A ∈ K^{n,n}           ∼ 25n^3 + O(n^2)
   only eigenvalues of A ∈ K^{n,n}                     ∼ 10n^3 + O(n^2)
   eigenvalues and eigenvectors of A = A^H ∈ K^{n,n}   ∼ 9n^3 + O(n^2)
   only eigenvalues of A = A^H ∈ K^{n,n}               ∼ 4/3 n^3 + O(n^2)
   only eigenvalues of tridiagonal A = A^H ∈ K^{n,n}   ∼ 30n^2 + O(n)

➣ O(n^3)!   ([20, Sect. 7.5, 8.2])

Note: eig not available for sparse matrix arguments.
Exception: d = eig(A) for sparse Hermitian matrices.   △

Example 5.2.5 (Runtimes of eig).

Code 5.2.6: measuring runtimes of eig

function eigtiming
A = rand(500,500); B = A'*A;
C = gallery('tridiag',500,1,3,1);
times = [];
for n = 5:5:500
   An = A(1:n,1:n); Bn = B(1:n,1:n); Cn = C(1:n,1:n);
   t1 = 1000; for k=1:3, tic; d = eig(An);     t1 = min(t1,toc); end
   t2 = 1000; for k=1:3, tic; [V,D] = eig(An); t2 = min(t2,toc); end
   t3 = 1000; for k=1:3, tic; d = eig(Bn);     t3 = min(t3,toc); end
   t4 = 1000; for k=1:3, tic; [V,D] = eig(Bn); t4 = min(t4,toc); end
   t5 = 1000; for k=1:3, tic; d = eig(Cn);     t5 = min(t5,toc); end
   times = [times; n t1 t2 t3 t4 t5];
end
figure;   % plotting of the measured runtimes (rest of the listing truncated in this excerpt)
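Remark 5.2.3 can be observed with MATLAB's built-in hess, which performs such a Householder-based unitary similarity transformation. The snippet below is only a sketch (test matrix, tolerance, and variable names are our own choices) and assumes the function eigqr from Code 5.2.2 is available on the path.

% Sketch (not from the notes): for symmetric A, a Householder-based unitary
% similarity transformation (here MATLAB's hess) yields a tridiagonal matrix T
% with the same spectrum; the shifted QR-algorithm then operates on T.
n = 20;
A = rand(n); A = A + A';        % symmetric test matrix (hypothetical choice)
[P, T] = hess(A);               % A = P*T*P', P unitary; T tridiagonal since A = A'
norm(A - P*T*P')                % similarity transformation, ~ machine precision
norm(tril(T,-2))                % entries below the first subdiagonal vanish
d1 = sort(eigqr(T, 1e-8));      % QR-algorithm with shift from Code 5.2.2
d2 = sort(eig(A));              % reference: built-in eig
norm(d1 - d2)                   % spectra agree up to the prescribed tolerance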