
A Robust Eigensolver for 3 × 3 Symmetric Matrices

David Eberly, Geometric Tools, Redmond WA 98052
https://www.geometrictools.com/

This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

Created: December 6, 2014
Last Modified: August 24, 2021

Contents

1 Introduction
2 An Iterative Algorithm
3 A Variation of the Iterative Algorithm
  3.1 Case |b12| ≤ |b01|
  3.2 Case |b01| ≤ |b12|
  3.3 Estimating the Eigenvectors
4 Implementation of the Iterative Algorithm
5 A Noniterative Algorithm
  5.1 Computing the Eigenvalues
  5.2 Computing the Eigenvectors
6 Implementation of the Noniterative Algorithm

1 Introduction

Let A be a 3 × 3 symmetric matrix of real numbers. From linear algebra, A has all real-valued eigenvalues and a full basis of eigenvectors. Let D = Diagonal(λ0, λ1, λ2) be the diagonal matrix whose diagonal entries are the eigenvalues; the eigenvalues are not necessarily distinct. Let R = [U0 U1 U2] be an orthogonal matrix whose columns are linearly independent eigenvectors, ordered consistently with the diagonal entries of D; that is, A Ui = λi Ui. The eigendecomposition is A = R D R^T.

The typical presentation in a linear algebra class shows that the eigenvalues are the roots of the cubic polynomial det(A − λI) = 0, where I is the 3 × 3 identity matrix and the left-hand side is the determinant of the matrix A − λI. Closed-form equations exist for the roots of a cubic polynomial, so in theory one could compute the roots and, for each root λ, solve the equation (A − λI)U = 0 for nonzero vectors U.
Although theoretically correct, computing roots of the cubic polynomial using the closed-form equations and floating-point arithmetic can lead to inaccurate results.

2 An Iterative Algorithm

The matrix notation in this document uses 0-based indexing; that is, if M is a 3 × 3 matrix, the element in row r and column c is m_rc, where 0 ≤ r ≤ 2 and 0 ≤ c ≤ 2. The superdiagonal elements are m01 and m12, and the subdiagonal elements are m10 and m21. For symmetric M, the two lists are the same. The numerical algorithms for computing the eigenvalues and eigenvectors modify elements m_rc for c ≥ r. The elements m_rc for c < r are not stored because they are known by symmetry.

The classical numerical algorithm for computing the eigenvalues and eigenvectors of A initially uses a Householder reflection matrix H to compute B = H^T A H so that b02 = 0. This makes B a symmetric tridiagonal matrix. The matrix H is a reflection, so H^T = H. A sequence of Givens rotations G_k is used to drive the superdiagonal elements to zero. This is an iterative process for which a termination condition is required. If ℓ rotations are applied, the reduction is

  G_{ℓ−1}^T ··· G_0^T H^T A H G_0 ··· G_{ℓ−1} = D′ = D + E    (1)

where D′ is a symmetric tridiagonal matrix. The matrix D is diagonal and the matrix E is symmetric tridiagonal with diagonal elements 0 and superdiagonal elements that are sufficiently smaller than the diagonal elements of D adjacent to them. The diagonal elements of D are reasonable approximations to the eigenvalues of A. The orthogonal matrix R′ = H G_0 ··· G_{ℓ−1} has columns that are reasonable approximations to the eigenvectors of A.

The source code that implements this algorithm (for any size symmetric matrix) is in SymmetricEigensolver.h and is an implementation of Algorithm 8.3.3 (Symmetric QR Algorithm) described in [1]. Algorithm 8.3.1 (Householder Tridiagonalization) reduces a symmetric n × n matrix A to a tridiagonal matrix T using n − 2 Householder reflections.
Algorithm 8.3.2 (Implicit Symmetric QR Step with Wilkinson Shift) reduces T to the (nearly) diagonal matrix D′ of equation (1) using a finite number of Givens rotations, the number depending on a termination condition. The numerical errors are the elements of E = R′^T A R′ − D. The discussion of Algorithm 8.3.3 mentions that one expects |E| ≈ μ|A|, where |A| denotes the Frobenius norm of A and where μ is the unit roundoff for the floating-point arithmetic: 2^{−23} for float, which is FLT_EPSILON = 1.192092896e-7f, and 2^{−52} for double, which is DBL_EPSILON = 2.2204460492503131e-16.

If C is the current tridiagonal matrix obtained from the Givens rotations, the book uses the condition |c_{i,i+1}| ≤ ε (|c_{i,i}| + |c_{i+1,i+1}|) to determine when the reduction decouples to smaller problems. The value of ε > 0 is chosen to be small. That is, when a superdiagonal term is effectively zero, the iterations may be applied separately to two tridiagonal submatrices. The first submatrix is the upper-left square block with row and column indices from 0 to i, and the second submatrix is the lower-right square block with row and column indices from i + 1 to n − 1. The two submatrices are processed independently. The Geometric Tools source code is instead implemented to provide a termination condition suited to floating-point arithmetic, shown in Listing 1.

Listing 1. The condition used to decompose the current tridiagonal matrix into two tridiagonal submatrices. The matrix element c_{i,j} is written in code as c(i,j).

bool Converged(bool aggressive, Real diagonal0, Real diagonal1, Real superdiagonal)
{
    if (aggressive)
    {
        // Test whether the superdiagonal term c(i,i+1) is zero.
        return superdiagonal == 0;
    }
    else
    {
        // Test whether the superdiagonal term c(i,i+1) is effectively zero
        // compared to its diagonal neighbors.
        Real sum = std::fabs(diagonal0) + std::fabs(diagonal1);
        return sum + std::fabs(superdiagonal) == sum;
    }
}

Using the nonaggressive condition, the superdiagonal elements are effectively zero relative to their diagonal neighbors. The unit tests have shown that this interpretation of decoupling is effective.

3 A Variation of the Iterative Algorithm

The variation uses the Householder transformation to compute B = H^T A H, where b02 = 0. Let c = cos θ and s = sin θ for some angle θ. Then

  H^T A H = [ c   s  0 ]   [ a00 a01 a02 ]   [ c   s  0 ]
            [ s  −c  0 ] · [ a01 a11 a12 ] · [ s  −c  0 ]
            [ 0   0  1 ]   [ a02 a12 a22 ]   [ 0   0  1 ]

          = [ c(c a00 + s a01) + s(c a01 + s a11)   s(c a00 + s a01) − c(c a01 + s a11)   c a02 + s a12 ]
            [ s(c a00 + s a01) − c(c a01 + s a11)   s(s a00 − c a01) − c(s a01 − c a11)   s a02 − c a12 ]    (2)
            [ c a02 + s a12                         s a02 − c a12                         a22           ]

          = [ b00 b01 b02 ]
            [ b01 b11 b12 ]
            [ b02 b12 b22 ]

Require 0 = b02 = c a02 + s a12 = (c, s) · (a02, a12), which implies (c, s) is perpendicular to (a02, a12). Equivalently, (c, s) is parallel to (a12, −a02). The vector (c, s) is obtained by normalizing (a12, −a02), say, (c, s) = σ (a12, −a02) / sqrt(a02² + a12²) for sign σ ∈ {−1, +1}. Either choice of sign is allowed, but the Geometric Tools source code chooses σ so that cos θ ≤ 0, which allows some source code to be shared with the Givens reductions.

Generally, given an expression (cos φ, sin φ) · (−v, u) = 0 where (−v, u) ≠ (0, 0), the vector (cos φ, sin φ) is parallel to (u, v) and is obtained by normalizing (u, v), say (cos φ, sin φ) = σ (u, v) / sqrt(u² + v²) for σ ∈ {−1, +1}. The source code uses σ = −Sign(u),

  (cos φ, sin φ) = −Sign(u) (u, v) / sqrt(u² + v²) = (−|u|, −Sign(u) v) / sqrt(u² + v²)    (3)

Pseudocode for the computation is shown in Listing 2.

Listing 2. Given a pair (u, v), the pseudocode solves (cos φ, sin φ) · (−v, u) = 0 for (cos φ, sin φ).
void GetCosSin(Real u, Real v, Real& cosPhi, Real& sinPhi)
{
    Real length = std::sqrt(u * u + v * v);
    if (length > 0)
    {
        cosPhi = u / length;
        sinPhi = v / length;
        if (cosPhi > 0)
        {
            cosPhi = -cosPhi;
            sinPhi = -sinPhi;
        }
    }
    else
    {
        cosPhi = -1;
        sinPhi = 0;
    }
}

The pair (c, s) = (cos φ, sin φ) is constructed using GetCosSin(a12, -a02, c, s) or GetCosSin(-a12, a02, c, s). Reflection matrices are used rather than Givens rotations to force the superdiagonal terms to zero. The next two subsections illustrate this.

3.1 Case |b12| ≤ |b01|

Let |b12| ≤ |b01|. Choose a sequence of reflection matrices to drive b12 to zero; the first one is

  G1 = [ c1  0  −s1 ]
       [ s1  0   c1 ]    (4)
       [ 0   1   0  ]

where c1 = cos θ1 and s1 = sin θ1 for some angle θ1. Define the product

  P1 = G1^T B G1
     = [ c1(c1 b00 + s1 b01) + s1(c1 b01 + s1 b11)   s1 b12   s1 c1 (b11 − b00) + (c1² − s1²) b01        ]
       [ s1 b12                                      b22      c1 b12                                     ]    (5)
       [ s1 c1 (b11 − b00) + (c1² − s1²) b01         c1 b12   s1(s1 b00 − c1 b01) − c1(s1 b01 − c1 b11)  ]

P1 must be tridiagonal, which requires

  0 = s1 c1 (b11 − b00) + (c1² − s1²) b01 = (cos(2θ1), sin(2θ1)) · (b01, (b11 − b00)/2)    (6)

Use equation (3) with u = (b11 − b00)/2 and v = −b01 to compute (cos(2θ1), sin(2θ1)) with cos(2θ1) ≤ 0.