Change of Basis: Consider a Linear Transform, $P_B$, and Its Inverse, $P_B^{-1}$

Change of Basis

Consider a linear transform, $P_B$, and its inverse, $P_B^{-1}$, which map a vector back and forth between its representation in the standard basis and its representation in the basis $B$:

$$\mathbf{u} \;\overset{P_B}{\underset{P_B^{-1}}{\rightleftarrows}}\; [\mathbf{u}]_B.$$

Change of Basis (contd.)

Let $B$ consist of $N$ basis vectors, $\mathbf{b}_1 \dots \mathbf{b}_N$. Since $[\mathbf{u}]_B$ is the representation of $\mathbf{u}$ in $B$, it follows that

$$\mathbf{u} = ([\mathbf{u}]_B)_1\,\mathbf{b}_1 + ([\mathbf{u}]_B)_2\,\mathbf{b}_2 + \dots + ([\mathbf{u}]_B)_N\,\mathbf{b}_N.$$

But this is just the matrix-vector product

$$\mathbf{u} = B\,[\mathbf{u}]_B$$

where $B = \begin{bmatrix}\mathbf{b}_1 & \mathbf{b}_2 & \dots & \mathbf{b}_N\end{bmatrix}$. We see that $P_B^{-1} = B$ and $P_B = B^{-1}$.

Similarity Transforms

Now consider a linear transformation represented in the standard basis by the matrix $A$. We seek $[A]_B$, i.e., the representation of the corresponding linear transformation in the basis $B$:

$$\begin{array}{ccc}
\mathbf{u} & \xrightarrow{\;A\;} & A\mathbf{u} \\
{\scriptstyle B}\uparrow & & \downarrow{\scriptstyle B^{-1}} \\
[\mathbf{u}]_B & \xrightarrow{\;[A]_B\;} & [A\mathbf{u}]_B
\end{array}$$

The matrix we seek maps $[\mathbf{u}]_B$ into $[A\mathbf{u}]_B$. From the above diagram, we see that this matrix is the composition of $B$, $A$, and $B^{-1}$:

$$[A]_B = B^{-1} A B.$$

We say that $A$ and $[A]_B$ are related by a similarity transform.

Diag. of Symmetric Matrices

Because of linearity, one might expect that a transformation will have an especially simple representation in the basis of its eigenvectors, $\mathcal{X}$. Let $A$ be its representation in the standard basis and let the columns of $X$ be the eigenvectors of $A$. Then $X$ and $X^T = X^{-1}$ take us back and forth between the standard basis and $\mathcal{X}$:

$$\mathbf{u} \;\overset{X^T}{\underset{X}{\rightleftarrows}}\; [\mathbf{u}]_X.$$

Diag. of Symmetric Matrices (contd.)

The matrix we seek maps $[\mathbf{u}]_X$ into $[A\mathbf{u}]_X$:

$$\begin{array}{ccc}
\mathbf{u} & \xrightarrow{\;A\;} & A\mathbf{u} \\
{\scriptstyle X}\uparrow & & \downarrow{\scriptstyle X^T} \\
[\mathbf{u}]_X & \xrightarrow{\;[A]_X\;} & [A\mathbf{u}]_X
\end{array}$$

From the above diagram, we see that this matrix is the composition of $X$, $A$, and $X^T$:

$$\Lambda = X^T A X.$$

We observe that $\Lambda$ is diagonal with $\Lambda_{ii} = \lambda_i$, the eigenvalue of $A$ associated with eigenvector $\mathbf{x}_i$.

Spectral Thm. for Sym. Matrices

Any symmetric $N \times N$ matrix, $A$, with $N$ distinct eigenvalues can be factored as follows:

$$A = X \Lambda X^T$$

where $\Lambda$ is $N \times N$ and diagonal, $X$ and $X^T$ are $N \times N$ matrices, and the $i$-th column of $X$ (equal to the $i$-th row of $X^T$) is an eigenvector of $A$: $\lambda_i\mathbf{x}_i = A\mathbf{x}_i$ with eigenvalue $\Lambda_{ii} = \lambda_i$. Note that $\mathbf{x}_i$ is orthogonal to $\mathbf{x}_j$ when $i \neq j$:

$$\left(X X^T\right)_{ij} = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise.} \end{cases}$$

In other words, $X X^T = I$. Consequently, $X^T = X^{-1}$.

Spectral Thm. for Sym. Matrices (contd.)

Using the definition of matrix product and the fact that $\Lambda$ is diagonal, we can write $A = X \Lambda X^T$ as

$$(A)_{ij} = \sum_{k=1}^{N} (X)_{ik}\,\Lambda_{kk}\,(X^T)_{kj}.$$

Since $X = \begin{bmatrix}\mathbf{x}_1 & \mathbf{x}_2 & \dots & \mathbf{x}_N\end{bmatrix}$ and $\Lambda_{kk} = \lambda_k$,

$$(A)_{ij} = \sum_{k=1}^{N} (\mathbf{x}_k)_i\,\lambda_k\,(\mathbf{x}_k)_j = \sum_{k=1}^{N} \left(\lambda_k\,\mathbf{x}_k\mathbf{x}_k^T\right)_{ij}$$

so that

$$A = \sum_{k=1}^{N} \lambda_k\,\mathbf{x}_k\mathbf{x}_k^T$$

where $\lambda_k\mathbf{x}_k = A\mathbf{x}_k$.

Spectral Thm. for Sym. Matrices (contd.)

The spectral factorization of $A$ is:

$$A = \lambda_1\,\mathbf{x}_1\mathbf{x}_1^T + \lambda_2\,\mathbf{x}_2\mathbf{x}_2^T + \dots + \lambda_N\,\mathbf{x}_N\mathbf{x}_N^T.$$

Note that each $\lambda_n\,\mathbf{x}_n\mathbf{x}_n^T$ is a rank-one matrix. Let $A_i = \lambda_i\,\mathbf{x}_i\mathbf{x}_i^T$. Now, because $\mathbf{x}_i^T\mathbf{x}_i = 1$:

$$\lambda_i\,\mathbf{x}_i = \lambda_i\,\mathbf{x}_i\mathbf{x}_i^T\mathbf{x}_i = A_i\,\mathbf{x}_i,$$

i.e., $\mathbf{x}_i$ is the only eigenvector of $A_i$ and its only eigenvalue is $\lambda_i$.

Diag. of Non-symmetric Matrices

The situation is more complex when the transformation is represented by a non-symmetric matrix, $P$. Let the columns of $X$ be $P$'s right eigenvectors and the rows of $Y^T$ be its left eigenvectors. Then $X$ and $Y^T = X^{-1}$ take us back and forth between the standard basis and $\mathcal{X}$:

$$\mathbf{u} \;\overset{Y^T}{\underset{X}{\rightleftarrows}}\; [\mathbf{u}]_X.$$

Diag. of Non-symmetric Matrices (contd.)

The matrix we seek maps $[\mathbf{u}]_X$ into $[P\mathbf{u}]_X$:

$$\begin{array}{ccc}
\mathbf{u} & \xrightarrow{\;P\;} & P\mathbf{u} \\
{\scriptstyle X}\uparrow & & \downarrow{\scriptstyle Y^T} \\
[\mathbf{u}]_X & \xrightarrow{\;\Lambda\;} & [P\mathbf{u}]_X
\end{array}$$

From the above diagram, we see that this matrix is the composition of $X$, $P$, and $Y^T$:

$$\Lambda = Y^T P X.$$

We observe that $\Lambda$ is diagonal with $\Lambda_{ii} = \lambda_i$, the eigenvalue of $P$ associated with right eigenvector $\mathbf{x}_i$ and left eigenvector $\mathbf{y}_i$.
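As a concrete check of the symmetric case, the following is a minimal NumPy sketch; the 3 x 3 matrix A is an assumed example, not taken from the notes. It verifies that the eigenvector matrix X is orthogonal, that the similarity transform X^T A X is diagonal, and that A is recovered as the sum of the rank-one terms lambda_k x_k x_k^T.

import numpy as np

# Assumed example: a symmetric matrix A with distinct eigenvalues (3, 3 +/- sqrt(3)).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# For a symmetric matrix, eigh returns orthonormal eigenvectors as the columns of X.
lam, X = np.linalg.eigh(A)

# X is orthogonal: X X^T = I, so X^T = X^{-1}.
print(np.allclose(X @ X.T, np.eye(3)))        # True

# The similarity transform X^T A X diagonalizes A, with Lambda_ii = lambda_i.
Lam = X.T @ A @ X
print(np.allclose(Lam, np.diag(lam)))         # True

# Spectral factorization: A equals the sum of rank-one terms lambda_k x_k x_k^T.
A_rebuilt = sum(lam[k] * np.outer(X[:, k], X[:, k]) for k in range(3))
print(np.allclose(A, A_rebuilt))              # True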
Spectral Theorem

Any $N \times N$ matrix, $P$, with $N$ distinct eigenvalues can be factored as follows:

$$P = X \Lambda Y^T$$

where $\Lambda$ is $N \times N$ and diagonal, $X$ and $Y^T$ are $N \times N$ matrices, the $i$-th column of $X$ is a right eigenvector of $P$: $\lambda_i\mathbf{x}_i = P\mathbf{x}_i$ with eigenvalue $\Lambda_{ii} = \lambda_i$, and the $i$-th row of $Y^T$ is a left eigenvector of $P$: $\lambda_i\mathbf{y}_i^T = \mathbf{y}_i^T P$ with the same eigenvalue.

Spectral Theorem (contd.)

Note that $\mathbf{x}_i$ is orthogonal to $\mathbf{y}_j$ when $i \neq j$:

$$\left(Y^T X\right)_{ij} = \mathbf{y}_i^T\mathbf{x}_j = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise.} \end{cases}$$

In other words, $Y^T X = I$. Consequently, $Y^T = X^{-1}$.

Spectral Theorem (contd.)

The spectral factorization of $P$ is:

$$P = \lambda_1\,\mathbf{x}_1\mathbf{y}_1^T + \lambda_2\,\mathbf{x}_2\mathbf{y}_2^T + \dots + \lambda_N\,\mathbf{x}_N\mathbf{y}_N^T.$$

Note that each $\lambda_n\,\mathbf{x}_n\mathbf{y}_n^T$ is a rank-one matrix. Let $P_i = \lambda_i\,\mathbf{x}_i\mathbf{y}_i^T$. Now, because $\mathbf{y}_i^T\mathbf{x}_i = 1$:

$$\lambda_i\,\mathbf{x}_i = \lambda_i\,\mathbf{x}_i\mathbf{y}_i^T\mathbf{x}_i = P_i\,\mathbf{x}_i$$

and

$$\lambda_i\,\mathbf{y}_i^T = \mathbf{y}_i^T\,\lambda_i\,\mathbf{x}_i\mathbf{y}_i^T = \mathbf{y}_i^T P_i,$$

i.e., $\mathbf{x}_i$ and $\mathbf{y}_i$ are the sole right and left eigenvectors of $P_i$, and its only eigenvalue is $\lambda_i$.

Stochastic Matrices

If $P$ is stochastic, then its columns sum to one:

$$1 = \sum_i P_{ij}.$$

Let $\mathbf{y}^T = \begin{bmatrix}1 & 1 & \dots & 1\end{bmatrix}$. It follows that

$$(\mathbf{y}^T)_j = \sum_i (\mathbf{y}^T)_i\,P_{ij}.$$

Consequently, $\mathbf{y}^T$ is a left eigenvector with unit eigenvalue of every stochastic matrix:

$$\mathbf{y}^T = \mathbf{y}^T P.$$

Stochastic Matrices (contd.)

What is the representation of an arbitrary distribution, $\mathbf{z}$, in the basis of eigenvectors of an arbitrary stochastic matrix, $P$?

$$\mathbf{z} = c_1\mathbf{x}_1 + c_2\mathbf{x}_2 + \dots + c_N\mathbf{x}_N = X\mathbf{c}.$$

Solving for $\mathbf{c}$:

$$\mathbf{c} = X^{-1}\mathbf{z} = Y^T\mathbf{z}.$$

Since $\mathbf{y}_1^T = \begin{bmatrix}1 & 1 & \dots & 1\end{bmatrix}$, it follows that

$$c_1 = \sum_j (Y^T)_{1j}\,z_j = \sum_j z_j = 1,$$

which is independent of the specific $\mathbf{z}$ and $P$!
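The claim that $c_1 = 1$ for every distribution can be checked numerically. The following is a minimal NumPy sketch under stated assumptions: the 3 x 3 column-stochastic matrix P and the distribution z are made-up examples, and the right eigenvector with unit eigenvalue is rescaled to sum to one so that the matching row of Y^T = X^{-1} is exactly the all-ones left eigenvector used in the notes.

import numpy as np

# Assumed example: a column-stochastic matrix (each column sums to 1),
# with distinct eigenvalues 1, 0.4, 0.3.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.6, 0.1],
              [0.2, 0.2, 0.6]])

# Right eigenvectors are the columns of X.
lam, X = np.linalg.eig(P)

# Rescale the eigenvector with unit eigenvalue so its entries sum to 1;
# this fixes the normalization y_1^T x_1 = 1 with y_1^T = [1 1 ... 1].
k = np.argmin(np.abs(lam - 1.0))
X[:, k] = X[:, k] / X[:, k].sum()

# Rows of Y^T = X^{-1} are the left eigenvectors.
YT = np.linalg.inv(X)

# The left eigenvector with unit eigenvalue is the all-ones row: y^T P = y^T.
print(np.allclose(YT[k], np.ones(3)))           # True
print(np.allclose(np.ones(3) @ P, np.ones(3)))  # True

# For any distribution z, the coefficient on that eigenvector is c_k = sum(z) = 1.
z = np.array([0.1, 0.7, 0.2])
c = YT @ z
print(np.isclose(c[k], 1.0))                    # True, independent of z and P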
