
LINEAR ALGEBRA

W W L CHEN

(c) W W L Chen, 1997, 2008.

This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain, and may be downloaded and/or photocopied, with or without permission from the author. However, this document may not be kept on any information storage and retrieval system without permission from the author, unless such system is not accessible to any individuals other than its owners.

Chapter 10

ORTHOGONAL MATRICES

10.1. Introduction

Definition. A square matrix A with real entries satisfying the condition A^{-1} = A^t is called an orthogonal matrix.

Example 10.1.1. Consider the euclidean space R^2 with the euclidean inner product. The vectors u_1 = (1, 0) and u_2 = (0, 1) form an orthonormal basis B = {u_1, u_2}. Let us now rotate u_1 and u_2 anticlockwise by an angle θ to obtain v_1 = (cos θ, sin θ) and v_2 = (−sin θ, cos θ). Then C = {v_1, v_2} is also an orthonormal basis.

[Figure: the basis vectors u_1, u_2 and their images v_1, v_2 under anticlockwise rotation by the angle θ.]

Chapter 10 : Orthogonal Matrices        page 1 of 11

The transition matrix from the basis C to the basis B is given by
\[ P = ( \, [v_1]_B \ \ [v_2]_B \, ) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}. \]
Clearly
\[ P^{-1} = P^t = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}. \]
In fact, our example is a special case of the following general result.

PROPOSITION 10A. Suppose that B = {u_1, ..., u_n} and C = {v_1, ..., v_n} are two orthonormal bases of a real inner product space V. Then the transition matrix P from the basis C to the basis B is an orthogonal matrix.

Example 10.1.2. The matrix
\[ A = \begin{pmatrix} 1/3 & -2/3 & 2/3 \\ 2/3 & -1/3 & -2/3 \\ 2/3 & 2/3 & 1/3 \end{pmatrix} \]
is orthogonal, since
\[ A^tA = \begin{pmatrix} 1/3 & 2/3 & 2/3 \\ -2/3 & -1/3 & 2/3 \\ 2/3 & -2/3 & 1/3 \end{pmatrix} \begin{pmatrix} 1/3 & -2/3 & 2/3 \\ 2/3 & -1/3 & -2/3 \\ 2/3 & 2/3 & 1/3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]
Note also that the row vectors of A, namely (1/3, −2/3, 2/3), (2/3, −1/3, −2/3) and (2/3, 2/3, 1/3), are orthonormal. So are the column vectors of A. In fact, our last observation is not a coincidence.

PROPOSITION 10B. Suppose that A is an n × n matrix with real entries.
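The orthogonality condition in these examples is easy to check numerically. Below is a small sketch of my own (using numpy; it is not part of the original text), where `P` is the rotation matrix of Example 10.1.1 for a sample angle and `A` is the matrix of Example 10.1.2.

```python
import numpy as np

# Rotation matrix from Example 10.1.1, for a sample angle theta.
theta = 0.7
P = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The 3 x 3 matrix from Example 10.1.2.
A = np.array([[1, -2,  2],
              [2, -1, -2],
              [2,  2,  1]]) / 3

# A matrix is orthogonal exactly when its transpose is its inverse,
# i.e. when A^t A is the identity matrix.
print(np.allclose(P.T @ P, np.eye(2)))  # True
print(np.allclose(A.T @ A, np.eye(3)))  # True
```

The same check with `A @ A.T` would also pass, reflecting Proposition 10B below: rows and columns are orthonormal simultaneously.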
Then
(a) A is orthogonal if and only if the row vectors of A form an orthonormal basis of R^n under the euclidean inner product; and
(b) A is orthogonal if and only if the column vectors of A form an orthonormal basis of R^n under the euclidean inner product.

Proof. We shall only prove (a), since the proof of (b) is almost identical. Let r_1, ..., r_n denote the row vectors of A. Then
\[ AA^t = \begin{pmatrix} r_1 \cdot r_1 & \dots & r_1 \cdot r_n \\ \vdots & & \vdots \\ r_n \cdot r_1 & \dots & r_n \cdot r_n \end{pmatrix}. \]
It follows that AA^t = I if and only if for every i, j = 1, ..., n, we have
\[ r_i \cdot r_j = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases} \]
if and only if r_1, ..., r_n are orthonormal.

PROPOSITION 10C. Suppose that A is an n × n matrix with real entries. Suppose further that the inner product in R^n is the euclidean inner product. Then the following are equivalent:
(a) A is orthogonal.
(b) For every x ∈ R^n, we have ‖Ax‖ = ‖x‖.
(c) For every u, v ∈ R^n, we have Au · Av = u · v.

Proof. ((a)⇒(b)) Suppose that A is orthogonal, so that A^tA = I. It follows that for every x ∈ R^n, we have
\[ \|Ax\|^2 = Ax \cdot Ax = x^tA^tAx = x^tIx = x^tx = x \cdot x = \|x\|^2. \]
((b)⇒(c)) Suppose that ‖Ax‖ = ‖x‖ for every x ∈ R^n. Then for every u, v ∈ R^n, we have
\[ Au \cdot Av = \tfrac{1}{4}\|Au + Av\|^2 - \tfrac{1}{4}\|Au - Av\|^2 = \tfrac{1}{4}\|A(u+v)\|^2 - \tfrac{1}{4}\|A(u-v)\|^2 = \tfrac{1}{4}\|u+v\|^2 - \tfrac{1}{4}\|u-v\|^2 = u \cdot v. \]
((c)⇒(a)) Suppose that Au · Av = u · v for every u, v ∈ R^n. Then
\[ Iu \cdot v = u \cdot v = Au \cdot Av = v^tA^tAu = A^tAu \cdot v, \]
so that (A^tA − I)u · v = 0. In particular, this holds when v = (A^tA − I)u, so that (A^tA − I)u · (A^tA − I)u = 0, whence
(A^tA − I)u = 0,   (1)
in view of Proposition 9A(d). But then (1) is a system of n homogeneous linear equations in n unknowns satisfied by every u ∈ R^n. Hence the coefficient matrix A^tA − I must be the zero matrix, and so A^tA = I.

Proof of Proposition 10A. For every u ∈ V, we can write
\[ u = \beta_1u_1 + \dots + \beta_nu_n = \gamma_1v_1 + \dots + \gamma_nv_n, \]
where β_1, ..., β_n, γ_1, ..., γ_n ∈ R, and where B = {u_1, ..., u_n} and C = {v_1, ..., v_n} are the two orthonormal bases of V.
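Propositions 10B and 10C lend themselves to a quick numerical sanity check. The sketch below is my own addition (using numpy): it tests the row-orthonormality criterion, norm preservation, and dot-product preservation on the matrix of Example 10.1.2 with randomly chosen vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# The orthogonal matrix from Example 10.1.2.
A = np.array([[1, -2,  2],
              [2, -1, -2],
              [2,  2,  1]]) / 3

# Proposition 10B(a): the rows of A are orthonormal, so the matrix of
# pairwise dot products r_i . r_j (the Gram matrix A A^t) is the identity.
ok_rows = np.allclose(A @ A.T, np.eye(3))

# Proposition 10C(b): an orthogonal matrix preserves euclidean norms.
x = rng.standard_normal(3)
ok_norm = np.isclose(np.linalg.norm(A @ x), np.linalg.norm(x))

# Proposition 10C(c): it also preserves dot products.
u, v = rng.standard_normal(3), rng.standard_normal(3)
ok_dot = np.isclose((A @ u) @ (A @ v), u @ v)

print(ok_rows, ok_norm, ok_dot)  # True True True
```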
Then
\[ \|u\|^2 = \langle u, u \rangle = \langle \beta_1u_1 + \dots + \beta_nu_n, \, \beta_1u_1 + \dots + \beta_nu_n \rangle = \sum_{i=1}^n \sum_{j=1}^n \beta_i\beta_j \langle u_i, u_j \rangle = \sum_{i=1}^n \beta_i^2 = (\beta_1, \dots, \beta_n) \cdot (\beta_1, \dots, \beta_n). \]
Similarly,
\[ \|u\|^2 = \langle u, u \rangle = \langle \gamma_1v_1 + \dots + \gamma_nv_n, \, \gamma_1v_1 + \dots + \gamma_nv_n \rangle = \sum_{i=1}^n \sum_{j=1}^n \gamma_i\gamma_j \langle v_i, v_j \rangle = \sum_{i=1}^n \gamma_i^2 = (\gamma_1, \dots, \gamma_n) \cdot (\gamma_1, \dots, \gamma_n). \]
It follows that in R^n with the euclidean norm, we have ‖[u]_B‖ = ‖[u]_C‖. Since [u]_B = P[u]_C, this gives ‖P[u]_C‖ = ‖[u]_C‖ for every u ∈ V. Hence ‖Px‖ = ‖x‖ holds for every x ∈ R^n. It now follows from Proposition 10C that P is orthogonal.

10.2. Eigenvalues and Eigenvectors

In this section, we give a brief review of eigenvalues and eigenvectors, first discussed in Chapter 7. Suppose that
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix} \]
is an n × n matrix with real entries. Suppose further that there exist a number λ ∈ R and a non-zero vector v ∈ R^n such that Av = λv. Then we say that λ is an eigenvalue of the matrix A, and that v is an eigenvector corresponding to the eigenvalue λ. In this case, we have Av = λv = λIv, where I is the n × n identity matrix, so that (A − λI)v = 0. Since v ∈ R^n is non-zero, it follows that we must have
det(A − λI) = 0.   (2)
In other words, we must have
\[ \det \begin{pmatrix} a_{11} - \lambda & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} - \lambda & \dots & a_{2n} \\ \vdots & & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{nn} - \lambda \end{pmatrix} = 0. \]
Note that (2) is a polynomial equation. The polynomial det(A − λI) is called the characteristic polynomial of the matrix A. Solving equation (2) gives the eigenvalues of the matrix A.

On the other hand, for any eigenvalue λ of the matrix A, the set
{v ∈ R^n : (A − λI)v = 0}   (3)
is the nullspace of the matrix A − λI, and forms a subspace of R^n. This space (3) is called the eigenspace corresponding to the eigenvalue λ.

Suppose now that A has eigenvalues λ_1, ..., λ_n ∈ R, not necessarily distinct, with corresponding eigenvectors v_1, ..., v_n ∈ R^n, and that v_1, ..., v_n are linearly independent. Then it can be shown that P^{-1}AP = D, where
\[ P = ( \, v_1 \ \dots \ v_n \, ) \quad \text{and} \quad D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}. \]
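As a concrete illustration of the definitions reviewed above (my addition, not from the original text), the following sketch computes the eigenvalues and eigenvectors of a small matrix with numpy and verifies the defining relation Av = λv for each pair.

```python
import numpy as np

# A small matrix with real, distinct eigenvalues. Its characteristic
# polynomial is lambda^2 - 4*lambda + 3 = (lambda - 1)(lambda - 3),
# so the eigenvalues are 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column of `eigenvectors` is an eigenvector v satisfying A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(np.sort(eigenvalues))
```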
In fact, we say that A is diagonalizable if there exists an invertible matrix P with real entries such that P^{-1}AP is a diagonal matrix with real entries. It follows that A is diagonalizable if its eigenvectors form a basis of R^n. In the opposite direction, one can show that if A is diagonalizable, then it has n linearly independent eigenvectors in R^n. It therefore follows that the question of diagonalizing a matrix A with real entries is reduced to one of linear independence of its eigenvectors. We now summarize our discussion so far.

DIAGONALIZATION PROCESS. Suppose that A is an n × n matrix with real entries.
(1) Determine whether the n roots of the characteristic polynomial det(A − λI) are real.
(2) If not, then A is not diagonalizable. If so, then find the eigenvectors corresponding to these eigenvalues. Determine whether we can find n linearly independent eigenvectors.
(3) If not, then A is not diagonalizable. If so, then write
\[ P = ( \, v_1 \ \dots \ v_n \, ) \quad \text{and} \quad D = \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}, \]
where λ_1, ..., λ_n ∈ R are the eigenvalues of A and where v_1, ..., v_n ∈ R^n are respectively their corresponding eigenvectors. Then P^{-1}AP = D.

In particular, it can be shown that if A has distinct eigenvalues λ_1, ..., λ_n ∈ R, with corresponding eigenvectors v_1, ..., v_n ∈ R^n, then v_1, ..., v_n are linearly independent. It follows that all such matrices A are diagonalizable.

10.3. Orthonormal Diagonalization

We now consider the euclidean space R^n as an inner product space with the euclidean inner product. Given any n × n matrix A with real entries, we wish to find out whether there exists an orthonormal basis of R^n consisting of eigenvectors of A. Recall that in the Diagonalization Process discussed in the last section, the columns of the matrix P are eigenvectors of A, and these vectors form a basis of R^n.
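The three steps of the Diagonalization Process can be mirrored numerically. The sketch below is my own illustration (using numpy): it diagonalizes a 2 × 2 matrix whose eigenvalues are distinct and real, so diagonalizability is guaranteed.

```python
import numpy as np

# A matrix with distinct real eigenvalues (trace 7, determinant 10,
# so the eigenvalues are 5 and 2), hence diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps (1)-(2): compute eigenvalues and eigenvectors.
eigenvalues, P = np.linalg.eig(A)  # columns of P are eigenvectors

# Step (2): n linearly independent eigenvectors means P has full rank.
assert np.linalg.matrix_rank(P) == A.shape[0]

# Step (3): P^{-1} A P = D, the diagonal matrix of eigenvalues.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigenvalues))
```

Note that this P is invertible but, in general, not orthogonal; making P orthogonal is exactly the question taken up in the next section.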
It follows from Proposition 10B that this basis is orthonormal if and only if the matrix P is orthogonal.

Definition. An n × n matrix A with real entries is said to be orthogonally diagonalizable if there exists an orthogonal matrix P with real entries such that P^{-1}AP = P^tAP is a diagonal matrix with real entries.

First of all, we would like to determine which matrices are orthogonally diagonalizable. For those that are, we then need to discuss how we may find an orthogonal matrix P to carry out the diagonalization.
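The definition can be tested on a concrete matrix. In the sketch below (my addition; the choice of a symmetric example is mine), numpy's `eigh` routine, which is designed for symmetric input, returns an eigenvector matrix with orthonormal columns, so P is orthogonal and P^tAP is diagonal, exactly as the definition requires.

```python
import numpy as np

# A symmetric matrix chosen for illustration; which matrices are
# orthogonally diagonalizable is precisely the question posed above.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# For symmetric input, np.linalg.eigh returns real eigenvalues and an
# eigenvector matrix P whose columns are orthonormal.
eigenvalues, P = np.linalg.eigh(A)

# P is orthogonal: P^t P = I, so P^{-1} = P^t.
assert np.allclose(P.T @ P, np.eye(3))

# The definition is satisfied: P^t A P is diagonal,
# with the eigenvalues on the diagonal.
assert np.allclose(P.T @ A @ P, np.diag(eigenvalues))
```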