Chapter 3 of Calculus++: The Symmetric Eigenvalue Problem

by Eric A. Carlen
Professor of Mathematics, Georgia Tech
© 2003 by the author, all rights reserved

Table of Contents

Section 1: Diagonalizing 2 × 2 symmetric matrices
1.1 An explicit formula
1.2 2 × 2 orthogonal matrices: rotations and reflections

Section 2: Jacobi's Algorithm
2.1 Why iterate?
2.2 Will the Jacobi algorithm diagonalize any symmetric matrix?
2.3 Proof of the key lemma
2.4 Convergence of the Jacobi algorithm

Section 3: The eigenvalues of almost diagonal matrices
3.1 The Gershgorin Disk Theorem
3.2 Application to Jacobi iteration
3.3 Continuity of Eigenvalues

Section 4: The singular value decomposition
4.1 What is a singular value decomposition?
4.2 The singular value decomposition and least squares solutions
4.3 Finding a singular value decomposition

Section 5: Geometry and the singular value decomposition
5.1 The image of the unit circle under a linear transformation
5.2 The singular value decomposition and volume
5.3 The singular value decomposition and low rank approximation

Section 1: Diagonalizing 2 × 2 Symmetric Matrices

1.1: An explicit formula

Symmetric matrices are special. For instance, they always have real eigenvalues. There are several ways to see this, but for 2 × 2 symmetric matrices, direct computation is simple enough. Let $A$ be any symmetric 2 × 2 matrix:
$$A = \begin{bmatrix} a & b \\ b & d \end{bmatrix}.$$
Then
$$A - tI = \begin{bmatrix} a - t & b \\ b & d - t \end{bmatrix},$$
so that
$$\det(A - tI) = (a - t)(d - t) - b^2 = t^2 - (a + d)t + ad - b^2.$$
Hence the eigenvalues of $A$ are the roots of
$$t^2 - (a + d)t + ad - b^2 = 0. \tag{1.1}$$
Completing the square, we obtain
$$\left(t - \frac{a + d}{2}\right)^2 = b^2 - ad + \left(\frac{a + d}{2}\right)^2
= b^2 - ad + \frac{a^2 + d^2 + 2ad}{4}
= b^2 + \frac{a^2 + d^2 - 2ad}{4}
= b^2 + \left(\frac{a - d}{2}\right)^2.$$
Hence (1.1) becomes
$$t = \frac{a + d}{2} \pm \sqrt{b^2 + \left(\frac{a - d}{2}\right)^2}.$$
Since $b^2 + \left(\frac{a - d}{2}\right)^2$ is a sum of two squares, it is nonnegative, and so the square root is real. Therefore, the two eigenvalues are
$$\mu_+ = \frac{a + d}{2} + \sqrt{b^2 + \left(\frac{a - d}{2}\right)^2}
\quad\text{and}\quad
\mu_- = \frac{a + d}{2} - \sqrt{b^2 + \left(\frac{a - d}{2}\right)^2}. \tag{1.2}$$

We have just written down an explicit formula for the eigenvalues of the 2 × 2 symmetric matrix $A = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$. As you can see from the formula, the eigenvalues are both real.

There is even more that is special about n × n symmetric matrices: they can always be diagonalized, and by an orthogonal matrix at that. Again, in the 2 × 2 case, direct computation leads to an explicit formula.

Let $B = A - \mu_+ I$. Then a nonzero vector $v$ is an eigenvector of $A$ with eigenvalue $\mu_+$ if and only if $Bv = 0$. Now write $B$ in row vector form:
$$B = \begin{bmatrix} r_1 \\ r_2 \end{bmatrix}.$$
By a basic formula for matrix multiplication,
$$Bv = \begin{bmatrix} r_1 \\ r_2 \end{bmatrix} v = \begin{bmatrix} r_1 \cdot v \\ r_2 \cdot v \end{bmatrix}.$$
So if $v$ is an eigenvector with eigenvalue $\mu_+$, then
$$r_1 \cdot v = 0 \quad\text{and}\quad r_2 \cdot v = 0.$$
Now $r_1 \cdot v = 0$ if and only if $v$ is a multiple of $r_1^\perp$. This means that a vector $v$ is an eigenvector of $A$ with eigenvalue $\mu_+$ if and only if $v$ is a multiple of $r_1^\perp$. In particular, $r_1^\perp$ is an eigenvector of $A$ with eigenvalue $\mu_+$.
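Both the eigenvalue formula (1.2) and the claim that $r_1^\perp$ is an eigenvector for $\mu_+$ are easy to test numerically. Here is a minimal Python sketch (not part of the original notes), assuming NumPy is available; the helper name `perp` is ours, implementing the convention $(x, y)^\perp = (-y, x)$ used throughout this section:

```python
import numpy as np

def perp(v):
    """Rotate a 2-vector by 90 degrees counterclockwise: (x, y) -> (-y, x)."""
    return np.array([-v[1], v[0]])

a, b, d = 3.0, 2.0, 6.0
A = np.array([[a, b], [b, d]])

# Formula (1.2) for the eigenvalues of [[a, b], [b, d]].
root = np.sqrt(b**2 + ((a - d) / 2.0)**2)
mu_plus, mu_minus = (a + d) / 2.0 + root, (a + d) / 2.0 - root

# Check against a general-purpose solver (eigvalsh returns ascending order).
assert np.allclose([mu_minus, mu_plus], np.linalg.eigvalsh(A))

# r1 is the first row of A - mu_plus * I; r1_perp is an eigenvector for mu_plus.
r1 = (A - mu_plus * np.eye(2))[0]
assert np.allclose(A @ perp(r1), mu_plus * perp(r1))
```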
Normalizing $r_1^\perp$, we define
$$u_1 = \frac{1}{|r_1|}\, r_1^\perp.$$
This is a unit vector, and an eigenvector of $A$ with eigenvalue $\mu_+$.

Next, we use another basic fact about symmetric matrices: eigenvectors corresponding to distinct eigenvalues are orthogonal. So as long as $\mu_- \ne \mu_+$, the eigenvectors of $A$ with eigenvalue $\mu_-$ must be orthogonal to $u_1$. This means that $u_1^\perp$ is an eigenvector of $A$ with eigenvalue $\mu_-$. It is also a unit vector, and orthogonal to $u_1$, so if we define $u_2$ by
$$u_2 = u_1^\perp,$$
then $\{u_1, u_2\}$ is an orthonormal basis of $\mathbb{R}^2$ consisting of eigenvectors of $A$.

What if the eigenvalues are the same? You see from (1.2) that the two eigenvalues are the same if and only if $b^2 = 0$ and $(a - d)^2 = 0$, which means that $A = aI$, in which case $A$ is already diagonal, and every vector in $\mathbb{R}^2$ is an eigenvector of $A$ with eigenvalue $a$. Hence the same formulas apply in this case as well.

Now form the matrix $U$ defined by
$$U = [u_1, u_2].$$
Then
$$AU = A[u_1, u_2] = [Au_1, Au_2] = [\mu_+ u_1, \mu_- u_2] = [u_1, u_2] \begin{bmatrix} \mu_+ & 0 \\ 0 & \mu_- \end{bmatrix}.$$
If we define $D$ to be the diagonal matrix
$$D = \begin{bmatrix} \mu_+ & 0 \\ 0 & \mu_- \end{bmatrix},$$
then we can rewrite this as
$$AU = UD. \tag{1.3}$$
Now since $U$ has orthonormal columns, it is an orthogonal matrix, and hence $U^t$ is the inverse of $U$. Therefore, (1.3) can be rewritten as
$$D = U^t A U.$$
We summarize all of this in the following theorem:

Theorem 1 (Eigenvectors and eigenvalues for 2 × 2 symmetric matrices) Let $A = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$ be any 2 × 2 symmetric matrix. Then the eigenvalues of $A$ are
$$\mu_+ = \frac{a + d}{2} + \sqrt{b^2 + \left(\frac{a - d}{2}\right)^2}
\quad\text{and}\quad
\mu_- = \frac{a + d}{2} - \sqrt{b^2 + \left(\frac{a - d}{2}\right)^2}. \tag{1.4}$$
Moreover, if we define $r_1$ and $r_2$ by
$$A - \mu_+ I = \begin{bmatrix} r_1 \\ r_2 \end{bmatrix},$$
and put
$$u_1 = \frac{1}{|r_1|}\, r_1^\perp \quad\text{and}\quad u_2 = u_1^\perp, \tag{1.5}$$
then $\{u_1, u_2\}$ is an orthonormal basis of $\mathbb{R}^2$ consisting of eigenvectors of $A$, and with
$$U = [u_1, u_2] \quad\text{and}\quad D = \begin{bmatrix} \mu_+ & 0 \\ 0 & \mu_- \end{bmatrix}, \tag{1.6}$$
we have
$$U^t A U = D. \tag{1.7}$$

Example 1 (Finding the eigenvectors and eigenvalues of a 2 × 2 symmetric matrix) Let $A = \begin{bmatrix} 3 & 2 \\ 2 & 6 \end{bmatrix}$. With $A = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$, we have
$$a = 3 \qquad b = 2 \qquad d = 6.$$
Using (1.4), we find that $\mu_\pm = \frac{9}{2} \pm \frac{5}{2}$; i.e.,
$$\mu_+ = 7 \quad\text{and}\quad \mu_- = 2.$$
Now,
$$A - \mu_+ I = \begin{bmatrix} 3 - 7 & 2 \\ 2 & 6 - 7 \end{bmatrix} = \begin{bmatrix} -4 & 2 \\ 2 & -1 \end{bmatrix}.$$
The first row of this matrix, written as a column vector, is $r_1 = \begin{bmatrix} -4 \\ 2 \end{bmatrix}$. Hence we have
$$u_1 = \frac{1}{\sqrt{5}} \begin{bmatrix} -1 \\ -2 \end{bmatrix}
\quad\text{and}\quad
u_2 = \frac{1}{\sqrt{5}} \begin{bmatrix} 2 \\ -1 \end{bmatrix}. \tag{1.8}$$

Example 2 (Diagonalizing a 2 × 2 symmetric matrix) Let $A$ be the 2 × 2 matrix $A = \begin{bmatrix} 3 & 2 \\ 2 & 6 \end{bmatrix}$ that we considered in Example 1. There we found that the eigenvalues are 7 and 2, with corresponding unit eigenvectors $u_1 = \frac{1}{\sqrt{5}} \begin{bmatrix} -1 \\ -2 \end{bmatrix}$ and $u_2 = \frac{1}{\sqrt{5}} \begin{bmatrix} 2 \\ -1 \end{bmatrix}$. Hence from (1.6), we have
$$U = \frac{1}{\sqrt{5}} \begin{bmatrix} -1 & 2 \\ -2 & -1 \end{bmatrix}
\quad\text{and}\quad
D = \begin{bmatrix} 7 & 0 \\ 0 & 2 \end{bmatrix}.$$
As you can check, $U^t A U = D$, in agreement with (1.7).

Theorem 1 provides one way to diagonalize a 2 × 2 symmetric matrix with an orthogonal matrix $U$. However, there is something special about it: the matrix $U$ is not only an orthogonal matrix; it is a rotation matrix, and in $D$, the eigenvalues are listed in decreasing order along the diagonal. This turns out to be useful, and to explain it better, we recall a few facts about 2 × 2 orthogonal matrices.

1.2: 2 × 2 orthogonal matrices: rotations and reflections

Let $U = [u_1, u_2]$ be any orthogonal matrix. Then $u_1$ is a unit vector, so
$$u_1 = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}$$
for some $\theta$ with $0 \le \theta < 2\pi$.

Next, $u_2$ is orthogonal to $u_1$, but there are exactly two unit vectors that are orthogonal to $u_1$, namely $\pm u_1^\perp$. Therefore,
$$\text{either}\quad u_2 = \begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix}
\quad\text{or else}\quad
u_2 = \begin{bmatrix} \sin\theta \\ -\cos\theta \end{bmatrix}.$$
In the first case,
$$U = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \tag{1.9}$$
while in the second case,
$$U = \begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}. \tag{1.10}$$
The matrix $U$ in (1.9) describes a counterclockwise rotation through the angle $\theta$.
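Theorem 1 translates directly into a short program. The following is a minimal sketch (ours, not part of the original notes), assuming NumPy; the function name `diagonalize_2x2_symmetric` is our own, and the sketch assumes $b \ne 0$ (if $b = 0$, $A$ is already diagonal and nothing needs to be done):

```python
import numpy as np

def diagonalize_2x2_symmetric(a, b, d):
    """Return (U, D) with U^t A U = D for A = [[a, b], [b, d]], per Theorem 1.

    Assumes b != 0; if b == 0, A is already diagonal.
    """
    root = np.hypot(b, (a - d) / 2.0)              # sqrt(b^2 + ((a-d)/2)^2)
    mu_plus, mu_minus = (a + d) / 2.0 + root, (a + d) / 2.0 - root
    r1 = np.array([a - mu_plus, b])                # first row of A - mu_plus*I
    u1 = np.array([-r1[1], r1[0]]) / np.linalg.norm(r1)  # normalized r1_perp
    u2 = np.array([-u1[1], u1[0]])                 # u1_perp
    return np.column_stack([u1, u2]), np.diag([mu_plus, mu_minus])

# The matrix from Examples 1 and 2:
A = np.array([[3.0, 2.0], [2.0, 6.0]])
U, D = diagonalize_2x2_symmetric(3.0, 2.0, 6.0)
assert np.allclose(U.T @ A @ U, D)                 # (1.7)
assert np.isclose(np.linalg.det(U), 1.0)           # U is a rotation
```

The final assertion checks the point made just above: the $U$ produced by Theorem 1 has determinant $+1$, i.e., it is of the rotation form (1.9).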
Since a rotation matrix of the form (1.9) is exactly the sort of $U$ we get using Theorem 1, we see that this theorem provides us a diagonalization in terms of a rotation matrix.

What we have said so far is all that is really important in what follows, but you may be wondering what sort of transformation might be encoded in (1.10). There is a simple answer: the matrix $U$ in (1.10) describes a reflection. To see this, define $\varphi = \theta/2$. Then
$$\begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}
= \begin{bmatrix} \cos^2\varphi - \sin^2\varphi & 2\sin\varphi\cos\varphi \\ 2\sin\varphi\cos\varphi & \sin^2\varphi - \cos^2\varphi \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} - 2\begin{bmatrix} \sin^2\varphi & -\sin\varphi\cos\varphi \\ -\sin\varphi\cos\varphi & \cos^2\varphi \end{bmatrix}.$$
From here, one easily sees that if $u_\varphi = \begin{bmatrix} \cos\varphi \\ \sin\varphi \end{bmatrix}$, then
$$\begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix} = I - 2\,(u_\varphi^\perp)(u_\varphi^\perp)^t.$$
From here it follows that with $U$ given by (1.10),
$$U u_\varphi = u_\varphi \quad\text{and}\quad U u_\varphi^\perp = -u_\varphi^\perp.$$
This shows that the matrix $U$ in (1.10) is the reflection about the line through the origin and $u_\varphi$; see the numerical sketch after the problems below.

Problems

1. Let $A = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix}$. Use Theorem 1 to find the eigenvectors and eigenvalues of $A$, and find an orthogonal matrix $U$ that diagonalizes $A$.

2. Let $A = \begin{bmatrix} 4 & 2 \\ 2 & 4 \end{bmatrix}$. Use Theorem 1 to find the eigenvectors and eigenvalues of $A$, and find an orthogonal matrix $U$ that diagonalizes $A$.
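The rotation/reflection classification of (1.9) and (1.10) can also be verified numerically. Here is a minimal sketch (ours, not the author's), assuming NumPy; the angle `theta = 0.7` is an arbitrary test value:

```python
import numpy as np

theta = 0.7                      # an arbitrary test angle
phi = theta / 2.0

# The two families of 2x2 orthogonal matrices:
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])    # (1.9)
reflection = np.array([[np.cos(theta),  np.sin(theta)],
                       [np.sin(theta), -np.cos(theta)]])  # (1.10)

u_phi = np.array([np.cos(phi), np.sin(phi)])
u_phi_perp = np.array([-np.sin(phi), np.cos(phi)])

# The identity U = I - 2 (u_phi_perp)(u_phi_perp)^t from the text:
assert np.allclose(reflection,
                   np.eye(2) - 2.0 * np.outer(u_phi_perp, u_phi_perp))
# The line through u_phi is fixed, and its normal direction is flipped:
assert np.allclose(reflection @ u_phi, u_phi)
assert np.allclose(reflection @ u_phi_perp, -u_phi_perp)
# Rotations have determinant +1, reflections -1:
assert np.isclose(np.linalg.det(rotation), 1.0)
assert np.isclose(np.linalg.det(reflection), -1.0)
```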
