Orthogonal Matrices and Diagonalization


LINEAR ALGEBRA
W W L CHEN

© W W L Chen, 1997, 2008. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied, with or without permission from the author. However, this document may not be kept on any information storage and retrieval system without permission from the author, unless such system is not accessible to any individuals other than its owners.

Chapter 10
ORTHOGONAL MATRICES

10.1. Introduction

Definition. A square matrix $A$ with real entries and satisfying the condition $A^{-1} = A^t$ is called an orthogonal matrix.

Example 10.1.1. Consider the euclidean space $\mathbb{R}^2$ with the euclidean inner product. The vectors $\mathbf{u}_1 = (1, 0)$ and $\mathbf{u}_2 = (0, 1)$ form an orthonormal basis $\mathcal{B} = \{\mathbf{u}_1, \mathbf{u}_2\}$. Let us now rotate $\mathbf{u}_1$ and $\mathbf{u}_2$ anticlockwise by an angle $\theta$ to obtain $\mathbf{v}_1 = (\cos\theta, \sin\theta)$ and $\mathbf{v}_2 = (-\sin\theta, \cos\theta)$. Then $\mathcal{C} = \{\mathbf{v}_1, \mathbf{v}_2\}$ is also an orthonormal basis.

[Figure: the basis vectors $\mathbf{u}_1, \mathbf{u}_2$ and their images $\mathbf{v}_1, \mathbf{v}_2$ under an anticlockwise rotation by the angle $\theta$.]

The transition matrix from the basis $\mathcal{C}$ to the basis $\mathcal{B}$ is given by
\[
P = \big( [\mathbf{v}_1]_{\mathcal{B}} \;\; [\mathbf{v}_2]_{\mathcal{B}} \big) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.
\]
Clearly
\[
P^{-1} = P^t = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}.
\]
In fact, our example is a special case of the following general result.

PROPOSITION 10A. Suppose that $\mathcal{B} = \{\mathbf{u}_1, \ldots, \mathbf{u}_n\}$ and $\mathcal{C} = \{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ are two orthonormal bases of a real inner product space $V$. Then the transition matrix $P$ from the basis $\mathcal{C}$ to the basis $\mathcal{B}$ is an orthogonal matrix.

Example 10.1.2. The matrix
\[
A = \begin{pmatrix} 1/3 & -2/3 & 2/3 \\ 2/3 & -1/3 & -2/3 \\ 2/3 & 2/3 & 1/3 \end{pmatrix}
\]
is orthogonal, since
\[
A^t A = \begin{pmatrix} 1/3 & 2/3 & 2/3 \\ -2/3 & -1/3 & 2/3 \\ 2/3 & -2/3 & 1/3 \end{pmatrix} \begin{pmatrix} 1/3 & -2/3 & 2/3 \\ 2/3 & -1/3 & -2/3 \\ 2/3 & 2/3 & 1/3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]
Note also that the row vectors of $A$, namely $(1/3, -2/3, 2/3)$, $(2/3, -1/3, -2/3)$ and $(2/3, 2/3, 1/3)$, are orthonormal. So are the column vectors of $A$. In fact, our last observation is not a coincidence.

PROPOSITION 10B. Suppose that $A$ is an $n \times n$ matrix with real entries. Then
(a) $A$ is orthogonal if and only if the row vectors of $A$ form an orthonormal basis of $\mathbb{R}^n$ under the euclidean inner product; and
(b) $A$ is orthogonal if and only if the column vectors of $A$ form an orthonormal basis of $\mathbb{R}^n$ under the euclidean inner product.

Proof. We shall only prove (a), since the proof of (b) is almost identical. Let $\mathbf{r}_1, \ldots, \mathbf{r}_n$ denote the row vectors of $A$. Then
\[
AA^t = \begin{pmatrix} \mathbf{r}_1 \cdot \mathbf{r}_1 & \ldots & \mathbf{r}_1 \cdot \mathbf{r}_n \\ \vdots & & \vdots \\ \mathbf{r}_n \cdot \mathbf{r}_1 & \ldots & \mathbf{r}_n \cdot \mathbf{r}_n \end{pmatrix}.
\]
It follows that $AA^t = I$ if and only if for every $i, j = 1, \ldots, n$, we have
\[
\mathbf{r}_i \cdot \mathbf{r}_j = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases}
\]
if and only if $\mathbf{r}_1, \ldots, \mathbf{r}_n$ are orthonormal.

PROPOSITION 10C. Suppose that $A$ is an $n \times n$ matrix with real entries. Suppose further that the inner product in $\mathbb{R}^n$ is the euclidean inner product. Then the following are equivalent:
(a) $A$ is orthogonal.
(b) For every $\mathbf{x} \in \mathbb{R}^n$, we have $\|A\mathbf{x}\| = \|\mathbf{x}\|$.
(c) For every $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$, we have $A\mathbf{u} \cdot A\mathbf{v} = \mathbf{u} \cdot \mathbf{v}$.

Proof. ((a)⇒(b)) Suppose that $A$ is orthogonal, so that $A^t A = I$. It follows that for every $\mathbf{x} \in \mathbb{R}^n$, we have
\[
\|A\mathbf{x}\|^2 = A\mathbf{x} \cdot A\mathbf{x} = \mathbf{x}^t A^t A \mathbf{x} = \mathbf{x}^t I \mathbf{x} = \mathbf{x}^t \mathbf{x} = \mathbf{x} \cdot \mathbf{x} = \|\mathbf{x}\|^2.
\]
((b)⇒(c)) Suppose that $\|A\mathbf{x}\| = \|\mathbf{x}\|$ for every $\mathbf{x} \in \mathbb{R}^n$. Then for every $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$, we have
\[
A\mathbf{u} \cdot A\mathbf{v} = \tfrac{1}{4}\|A\mathbf{u} + A\mathbf{v}\|^2 - \tfrac{1}{4}\|A\mathbf{u} - A\mathbf{v}\|^2 = \tfrac{1}{4}\|A(\mathbf{u}+\mathbf{v})\|^2 - \tfrac{1}{4}\|A(\mathbf{u}-\mathbf{v})\|^2 = \tfrac{1}{4}\|\mathbf{u}+\mathbf{v}\|^2 - \tfrac{1}{4}\|\mathbf{u}-\mathbf{v}\|^2 = \mathbf{u} \cdot \mathbf{v}.
\]
((c)⇒(a)) Suppose that $A\mathbf{u} \cdot A\mathbf{v} = \mathbf{u} \cdot \mathbf{v}$ for every $\mathbf{u}, \mathbf{v} \in \mathbb{R}^n$. Then
\[
I\mathbf{u} \cdot \mathbf{v} = \mathbf{u} \cdot \mathbf{v} = A\mathbf{u} \cdot A\mathbf{v} = \mathbf{v}^t A^t A \mathbf{u} = A^t A \mathbf{u} \cdot \mathbf{v},
\]
so that $(A^t A - I)\mathbf{u} \cdot \mathbf{v} = 0$. In particular, this holds when $\mathbf{v} = (A^t A - I)\mathbf{u}$, so that $(A^t A - I)\mathbf{u} \cdot (A^t A - I)\mathbf{u} = 0$, whence
\[
(A^t A - I)\mathbf{u} = \mathbf{0}, \qquad (1)
\]
in view of Proposition 9A(d). But then (1) is a system of $n$ homogeneous linear equations in $n$ unknowns satisfied by every $\mathbf{u} \in \mathbb{R}^n$. Hence the coefficient matrix $A^t A - I$ must be the zero matrix, and so $A^t A = I$.
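The computations in Examples 10.1.1 and 10.1.2, and the criterion of Proposition 10B, are easy to check numerically. The following Python/NumPy sketch is not part of the original notes; it assumes NumPy is available, and the angle $\theta$ is an arbitrary choice. It verifies that the rotation matrix satisfies $P^{-1} = P^t$ and that the rows and columns of the matrix $A$ of Example 10.1.2 are orthonormal.

```python
import numpy as np

# Rotation matrix from Example 10.1.1 (transition matrix from C to B);
# the angle theta is an arbitrary choice.
theta = 0.7
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# An orthogonal matrix satisfies P^{-1} = P^t, i.e. P^t P = I.
print(np.allclose(np.linalg.inv(P), P.T))   # True
print(np.allclose(P.T @ P, np.eye(2)))      # True

# The matrix A of Example 10.1.2.
A = np.array([[1, -2,  2],
              [2, -1, -2],
              [2,  2,  1]]) / 3.0

# Proposition 10B: A is orthogonal if and only if its rows
# (equivalently, its columns) are orthonormal.
print(np.allclose(A @ A.T, np.eye(3)))      # rows orthonormal
print(np.allclose(A.T @ A, np.eye(3)))      # columns orthonormal
```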
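Proposition 10C says that an orthogonal matrix preserves euclidean norms and dot products. A minimal NumPy sketch illustrating parts (b) and (c); it is not from the notes, and the angle and test vectors are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Any rotation matrix is orthogonal; theta is an arbitrary choice.
theta = 1.2
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

u = rng.standard_normal(2)
v = rng.standard_normal(2)

# (b) ||Ax|| = ||x|| for every x.
print(np.isclose(np.linalg.norm(A @ u), np.linalg.norm(u)))   # True

# (c) Au . Av = u . v for every u, v.
print(np.isclose(np.dot(A @ u, A @ v), np.dot(u, v)))         # True
```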
Proof of Proposition 10A. For every $\mathbf{u} \in V$, we can write
\[
\mathbf{u} = \beta_1 \mathbf{u}_1 + \ldots + \beta_n \mathbf{u}_n = \gamma_1 \mathbf{v}_1 + \ldots + \gamma_n \mathbf{v}_n,
\]
where $\beta_1, \ldots, \beta_n, \gamma_1, \ldots, \gamma_n \in \mathbb{R}$, and where $\mathcal{B} = \{\mathbf{u}_1, \ldots, \mathbf{u}_n\}$ and $\mathcal{C} = \{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ are two orthonormal bases of $V$. Then
\[
\|\mathbf{u}\|^2 = \langle \mathbf{u}, \mathbf{u} \rangle = \langle \beta_1 \mathbf{u}_1 + \ldots + \beta_n \mathbf{u}_n, \; \beta_1 \mathbf{u}_1 + \ldots + \beta_n \mathbf{u}_n \rangle = \sum_{i=1}^n \sum_{j=1}^n \beta_i \beta_j \langle \mathbf{u}_i, \mathbf{u}_j \rangle = \sum_{i=1}^n \beta_i^2 = (\beta_1, \ldots, \beta_n) \cdot (\beta_1, \ldots, \beta_n).
\]
Similarly,
\[
\|\mathbf{u}\|^2 = \langle \mathbf{u}, \mathbf{u} \rangle = \langle \gamma_1 \mathbf{v}_1 + \ldots + \gamma_n \mathbf{v}_n, \; \gamma_1 \mathbf{v}_1 + \ldots + \gamma_n \mathbf{v}_n \rangle = \sum_{i=1}^n \sum_{j=1}^n \gamma_i \gamma_j \langle \mathbf{v}_i, \mathbf{v}_j \rangle = \sum_{i=1}^n \gamma_i^2 = (\gamma_1, \ldots, \gamma_n) \cdot (\gamma_1, \ldots, \gamma_n).
\]
It follows that in $\mathbb{R}^n$ with the euclidean norm, we have $\|[\mathbf{u}]_{\mathcal{B}}\| = \|[\mathbf{u}]_{\mathcal{C}}\|$, and so $\|P[\mathbf{u}]_{\mathcal{C}}\| = \|[\mathbf{u}]_{\mathcal{C}}\|$ for every $\mathbf{u} \in V$. Hence $\|P\mathbf{x}\| = \|\mathbf{x}\|$ holds for every $\mathbf{x} \in \mathbb{R}^n$. It now follows from Proposition 10C that $P$ is orthogonal.

10.2. Eigenvalues and Eigenvectors

In this section, we give a brief review of eigenvalues and eigenvectors, first discussed in Chapter 7. Suppose that
\[
A = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \ldots & a_{nn} \end{pmatrix}
\]
is an $n \times n$ matrix with real entries. Suppose further that there exist a number $\lambda \in \mathbb{R}$ and a non-zero vector $\mathbf{v} \in \mathbb{R}^n$ such that $A\mathbf{v} = \lambda\mathbf{v}$. Then we say that $\lambda$ is an eigenvalue of the matrix $A$, and that $\mathbf{v}$ is an eigenvector corresponding to the eigenvalue $\lambda$. In this case, we have $A\mathbf{v} = \lambda\mathbf{v} = \lambda I\mathbf{v}$, where $I$ is the $n \times n$ identity matrix, so that $(A - \lambda I)\mathbf{v} = \mathbf{0}$. Since $\mathbf{v} \in \mathbb{R}^n$ is non-zero, it follows that we must have
\[
\det(A - \lambda I) = 0. \qquad (2)
\]
In other words, we must have
\[
\det \begin{pmatrix} a_{11} - \lambda & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} - \lambda & & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \ldots & a_{nn} - \lambda \end{pmatrix} = 0.
\]
Note that (2) is a polynomial equation. The polynomial $\det(A - \lambda I)$ is called the characteristic polynomial of the matrix $A$. Solving this equation (2) gives the eigenvalues of the matrix $A$.

On the other hand, for any eigenvalue $\lambda$ of the matrix $A$, the set
\[
\{\mathbf{v} \in \mathbb{R}^n : (A - \lambda I)\mathbf{v} = \mathbf{0}\} \qquad (3)
\]
is the nullspace of the matrix $A - \lambda I$, and forms a subspace of $\mathbb{R}^n$. This space (3) is called the eigenspace corresponding to the eigenvalue $\lambda$.

Suppose now that $A$ has eigenvalues $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$, not necessarily distinct, with corresponding eigenvectors $\mathbf{v}_1, \ldots, \mathbf{v}_n \in \mathbb{R}^n$, and that $\mathbf{v}_1, \ldots, \mathbf{v}_n$ are linearly independent. Then it can be shown that $P^{-1}AP = D$, where
\[
P = (\mathbf{v}_1 \;\; \ldots \;\; \mathbf{v}_n) \quad \text{and} \quad D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}.
\]
In fact, we say that $A$ is diagonalizable if there exists an invertible matrix $P$ with real entries such that $P^{-1}AP$ is a diagonal matrix with real entries. It follows that $A$ is diagonalizable if its eigenvectors form a basis of $\mathbb{R}^n$. In the opposite direction, one can show that if $A$ is diagonalizable, then it has $n$ linearly independent eigenvectors in $\mathbb{R}^n$. It therefore follows that the question of diagonalizing a matrix $A$ with real entries is reduced to one of linear independence of its eigenvectors.

We now summarize our discussion so far.

DIAGONALIZATION PROCESS. Suppose that $A$ is an $n \times n$ matrix with real entries.
(1) Determine whether the $n$ roots of the characteristic polynomial $\det(A - \lambda I)$ are real.
(2) If not, then $A$ is not diagonalizable. If so, then find the eigenvectors corresponding to these eigenvalues. Determine whether we can find $n$ linearly independent eigenvectors.
(3) If not, then $A$ is not diagonalizable. If so, then write
\[
P = (\mathbf{v}_1 \;\; \ldots \;\; \mathbf{v}_n) \quad \text{and} \quad D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix},
\]
where $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$ are the eigenvalues of $A$ and where $\mathbf{v}_1, \ldots, \mathbf{v}_n \in \mathbb{R}^n$ are respectively their corresponding eigenvectors. Then $P^{-1}AP = D$.
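The Diagonalization Process can be carried out numerically. The sketch below is not part of the notes; it uses NumPy's `numpy.linalg.eig`, which returns the eigenvalues together with a matrix whose columns are corresponding eigenvectors, and the $2 \times 2$ matrix is an arbitrary example with real, distinct eigenvalues.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Step (1): the eigenvalues are the roots of det(A - lambda I) = 0.
eigvals, P = np.linalg.eig(A)          # columns of P are eigenvectors
assert np.all(np.isreal(eigvals))      # all roots are real

# Step (2): the eigenvectors must be linearly independent,
# i.e. P must be invertible.
assert np.linalg.matrix_rank(P) == A.shape[0]

# Step (3): with P = (v1 ... vn) and D = diag(lambda_1, ..., lambda_n),
# we have P^{-1} A P = D.
D = np.diag(eigvals)
print(np.allclose(np.linalg.inv(P) @ A @ P, D))   # True
```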
In particular, it can be shown that if $A$ has distinct eigenvalues $\lambda_1, \ldots, \lambda_n \in \mathbb{R}$, with corresponding eigenvectors $\mathbf{v}_1, \ldots, \mathbf{v}_n \in \mathbb{R}^n$, then $\mathbf{v}_1, \ldots, \mathbf{v}_n$ are linearly independent. It follows that all such matrices $A$ are diagonalizable.

10.3. Orthonormal Diagonalization

We now consider the euclidean space $\mathbb{R}^n$ as an inner product space with the euclidean inner product. Given any $n \times n$ matrix $A$ with real entries, we wish to find out whether there exists an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$.

Recall that in the Diagonalization Process discussed in the last section, the columns of the matrix $P$ are eigenvectors of $A$, and these vectors form a basis of $\mathbb{R}^n$. It follows from Proposition 10B that this basis is orthonormal if and only if the matrix $P$ is orthogonal.

Definition. An $n \times n$ matrix $A$ with real entries is said to be orthogonally diagonalizable if there exists an orthogonal matrix $P$ with real entries such that $P^{-1}AP = P^t A P$ is a diagonal matrix with real entries.

First of all, we would like to determine which matrices are orthogonally diagonalizable. For those that are, we then need to discuss how we may find an orthogonal matrix $P$ to carry out the diagonalization.
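It is a standard result (the real spectral theorem) that real symmetric matrices are orthogonally diagonalizable. As an illustration, NumPy's `numpy.linalg.eigh` computes, for a symmetric matrix, real eigenvalues and an orthonormal set of eigenvectors. The sketch below is not from the notes, and the matrix is an arbitrary symmetric example; it checks that the resulting $P$ is orthogonal and that $P^t A P$ is diagonal.

```python
import numpy as np

# An arbitrary real symmetric matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is intended for symmetric (Hermitian) matrices: it returns real
# eigenvalues and a matrix P whose columns are orthonormal eigenvectors.
eigvals, P = np.linalg.eigh(A)

# P is orthogonal: P^{-1} = P^t.
print(np.allclose(P.T @ P, np.eye(3)))              # True

# P^t A P is the diagonal matrix of eigenvalues.
print(np.allclose(P.T @ A @ P, np.diag(eigvals)))   # True
```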