Unit 5: Matrix Diagonalization

Juan Luis Melero and Eduardo Eyras
October 2018

Contents

1 Matrix diagonalization
  1.1 Definitions
    1.1.1 Similar matrix
    1.1.2 Diagonalizable matrix
    1.1.3 Eigenvalues and eigenvectors
  1.2 Calculation of eigenvalues and eigenvectors
2 Properties of matrix diagonalization
  2.1 Similar matrices have the same eigenvalues
  2.2 Relation between the rank and the eigenvalues of a matrix
  2.3 Eigenvectors are linearly independent
  2.4 A matrix is diagonalizable if and only if it has n linearly independent eigenvectors
  2.5 Eigenvectors of a symmetric matrix are orthogonal
3 Matrix diagonalization process example
4 Exercises
5 R practical
  5.1 Eigenvalues and eigenvectors

1 Matrix diagonalization

1.1 Definitions

1.1.1 Similar matrix

Two matrices are called similar if they are related through a third matrix in the following way:

$A, B \in M_{n\times n}(\mathbb{R})$ are similar if $\exists P \in M_{n\times n}(\mathbb{R})$ invertible such that $A = P^{-1}BP$.

Note that two similar matrices have the same determinant. Proof: given $A, B$ similar, $A = P^{-1}BP$, hence

$$\det(A) = \det(P^{-1}BP) = \det(P^{-1})\det(B)\det(P) = \frac{1}{\det(P)}\det(B)\det(P) = \det(B)$$

1.1.2 Diagonalizable matrix

A matrix is diagonalizable if it is similar to a diagonal matrix, i.e.:

$A \in M_{n\times n}(\mathbb{R})$ is diagonalizable if $\exists P \in M_{n\times n}(\mathbb{R})$ invertible such that $P^{-1}AP$ is diagonal.

$P$ is the change-of-basis matrix to a basis in which $A$ has diagonal form. We will say that $A$ is diagonalizable (or diagonalizes) if and only if there is a basis $B = \{u_1, \dots, u_n\}$ with the property

$$Au_1 = \lambda_1 u_1, \;\dots,\; Au_n = \lambda_n u_n, \qquad \lambda_1, \dots, \lambda_n \in \mathbb{R}$$

That is, $A$ has diagonal form in this basis and, consequently, acts diagonally on every vector of the vector space expressed in this basis.

1.1.3 Eigenvalues and eigenvectors

Let $A \in M_{n\times n}(\mathbb{R})$ be a square matrix. A number $\lambda \in \mathbb{R}$ is an eigenvalue of $A$ if, for some non-zero vector $u$,

$$Au = \lambda u$$

We can rewrite this as a homogeneous system of equations:

$$Au = \lambda u \;\rightarrow\; (\lambda I_n - A)u = 0 \quad \text{or} \quad (A - \lambda I_n)u = 0$$

This homogeneous system has non-trivial solutions exactly when its determinant is zero.

1.2 Calculation of eigenvalues and eigenvectors

Let $A \in M_{n\times n}(\mathbb{R})$ be a square matrix. Then $\lambda \in \mathbb{R}$ is an eigenvalue of $A$ $\iff$ $\det(\lambda I_n - A) = 0$, and a vector $u$ is an eigenvector for $\lambda$ $\iff$ $(\lambda I_n - A)u = 0$.

First we calculate the eigenvalues and afterwards the eigenvectors. To compute the eigenvalues we solve the equation

$$0 = \det(\lambda I_n - A) = \lambda^n + \alpha_{n-1}\lambda^{n-1} + \dots + \alpha_2\lambda^2 + \alpha_1\lambda + \alpha_0$$

Thus each eigenvalue $\lambda_i$ is a root of this polynomial (the characteristic polynomial). To compute the eigenvectors, we solve the homogeneous linear system for each eigenvalue:

$$(\lambda_i I_n - A)u = 0$$

The set of solutions for a given eigenvalue is called the eigenspace of $A$ corresponding to the eigenvalue $\lambda$:

$$E(\lambda) = \{u \mid (\lambda I_n - A)u = 0\}$$

Note that $E(\lambda)$ is the kernel of a linear map (we leave it as an exercise to show that $\lambda I_n - A$ defines a linear map):

$$E(\lambda) = \mathrm{Ker}(\lambda I_n - A)$$

Since the kernel of a linear map is a vector subspace, the eigenspace is a vector subspace. Given a square matrix representing a linear map from a vector space to itself (an endomorphism), the eigenvectors describe the subspaces on which the matrix acts as multiplication by a number (the eigenvalue), i.e. the vectors on which the matrix diagonalizes.

Example in $\mathbb{R}^2$. Consider the matrix in 2 dimensions

$$A = \begin{pmatrix} -3 & 4 \\ -1 & 2 \end{pmatrix}$$

To diagonalize this matrix we write the characteristic equation:

$$\det(\lambda I_2 - A) = \det\begin{pmatrix} \lambda+3 & -4 \\ 1 & \lambda-2 \end{pmatrix} = (\lambda+3)(\lambda-2) + 4 = 0$$

$$\lambda^2 + \lambda - 2 = 0 \;\rightarrow\; (\lambda+2)(\lambda-1) = 0 \;\rightarrow\; \lambda = -2,\; 1$$

The eigenvalues of this matrix are $-2$ and $1$. Now we calculate the eigenvectors for each eigenvalue by solving the homogeneous linear system for the components of the vectors.

For eigenvalue $\lambda = -2$:

$$(-2I_2 - A)u = 0 \;\rightarrow\; \begin{pmatrix} -2+3 & -4 \\ 1 & -2-2 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\rightarrow\; \begin{pmatrix} u_1 - 4u_2 \\ u_1 - 4u_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\rightarrow\; u_1 = 4u_2$$

Hence, the eigenspace is:

$$E(-2) = \left\{ u = \begin{pmatrix} a \\ a/4 \end{pmatrix},\; a \in \mathbb{R} \right\}$$

In particular, $u = \begin{pmatrix} 1 \\ 1/4 \end{pmatrix}$ is an eigenvector with eigenvalue $-2$.

For eigenvalue $\lambda = 1$:

$$(I_2 - A)u = 0 \;\rightarrow\; \begin{pmatrix} 1+3 & -4 \\ 1 & 1-2 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\rightarrow\; \begin{pmatrix} 4u_1 - 4u_2 \\ u_1 - u_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\rightarrow\; u_1 = u_2$$

Hence the eigenspace has the form:

$$E(1) = \left\{ u = \begin{pmatrix} a \\ a \end{pmatrix},\; a \in \mathbb{R} \right\}$$

In particular, $u = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ is an eigenvector with eigenvalue $1$.
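These hand computations can be cross-checked numerically. As a preview of the R practical in Section 5, the following minimal sketch applies base R's eigen() to the example matrix above; note that eigen() returns unit-length eigenvectors, so they appear as scalar multiples of the eigenvectors found by hand.

```r
# Cross-check of the 2x2 example with base R's eigen()
A <- matrix(c(-3, 4,
              -1, 2), nrow = 2, byrow = TRUE)

ev <- eigen(A)
ev$values    # -2 and 1 (ordered by decreasing modulus)
ev$vectors   # unit-length eigenvectors: multiples of (4, 1) and (1, 1)

# Verify the defining relation A u = lambda u for each eigenpair
for (i in seq_along(ev$values)) {
  u <- ev$vectors[, i]
  print(all.equal(as.vector(A %*% u), ev$values[i] * u))
}
```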
Example in $\mathbb{R}^3$. Consider the following matrix:

$$A = \begin{pmatrix} -5 & 0 & 0 \\ 3 & 7 & 0 \\ 4 & -2 & 3 \end{pmatrix}$$

$$\det(\lambda I_3 - A) = \det\begin{pmatrix} \lambda+5 & 0 & 0 \\ -3 & \lambda-7 & 0 \\ -4 & 2 & \lambda-3 \end{pmatrix} = (\lambda+5)(\lambda-7)(\lambda-3) = 0$$

This gives 3 solutions: $\lambda = -5,\; 7,\; 3$.

Eigenvectors for $\lambda = -5$:

$$(-5I_3 - A)u = 0 \;\rightarrow\; \begin{pmatrix} 0 & 0 & 0 \\ -3 & -12 & 0 \\ -4 & 2 & -8 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \;\rightarrow\; u = \begin{pmatrix} -\tfrac{16}{9}z \\ \tfrac{4}{9}z \\ z \end{pmatrix}$$

The eigenspace is:

$$E(-5) = \left\{ u = \begin{pmatrix} x \\ y \\ z \end{pmatrix},\; x = -\tfrac{16}{9}z,\; y = \tfrac{4}{9}z,\; z \in \mathbb{R} \right\}$$

Eigenvectors for $\lambda = 7$:

$$(7I_3 - A)u = 0 \;\rightarrow\; \begin{pmatrix} 12 & 0 & 0 \\ -3 & 0 & 0 \\ -4 & 2 & 4 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \;\rightarrow\; u = \begin{pmatrix} 0 \\ -2z \\ z \end{pmatrix}$$

The eigenspace is:

$$E(7) = \left\{ u = \begin{pmatrix} 0 \\ -2z \\ z \end{pmatrix},\; z \in \mathbb{R} \right\}$$

Eigenvectors for $\lambda = 3$:

$$(3I_3 - A)u = 0 \;\rightarrow\; \begin{pmatrix} 8 & 0 & 0 \\ -3 & -4 & 0 \\ -4 & 2 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} \;\rightarrow\; u = \begin{pmatrix} 0 \\ 0 \\ z \end{pmatrix}$$

The eigenspace is:

$$E(3) = \left\{ u = \begin{pmatrix} 0 \\ 0 \\ z \end{pmatrix},\; z \in \mathbb{R} \right\}$$

2 Properties of matrix diagonalization

In this section we describe some of the properties of diagonalizable matrices.

2.1 Similar matrices have the same eigenvalues

Theorem: $A, B \in M_{n\times n}(\mathbb{R})$ similar $\implies$ $A, B$ have the same eigenvalues.

Proof: given two square matrices that are similar, $A, B \in M_{n\times n}(\mathbb{R})$ with $A = P^{-1}BP$. The eigenvalues are calculated with the characteristic polynomial, that is:

$$\det(\lambda I_n - A) = \det(\lambda P^{-1}P - P^{-1}BP) = \det(P^{-1}(\lambda I_n - B)P) = \det(P^{-1})\det(\lambda I_n - B)\det(P) = \det(\lambda I_n - B)$$

Hence, two similar matrices have the same characteristic polynomial and therefore have the same eigenvalues.

This result also allows us to understand better the process of diagonalization. The determinant of a diagonal matrix is the product of the elements in its diagonal, so the characteristic polynomial of a diagonal matrix factorizes as $\prod_i (\lambda - \lambda_i) = 0$, where the $\lambda_i$ are the eigenvalues. Thus, to diagonalize a matrix is to establish its similarity to a diagonal matrix containing its eigenvalues.

2.2 Relation between the rank and the eigenvalues of a matrix

Recall that the rank of a matrix is the maximum number of linearly independent row or column vectors.

Property: for a diagonalizable matrix $A$, $\mathrm{rank}(A)$ equals the number of non-zero eigenvalues of $A$ (counted with multiplicity).

Proof: a diagonalizable matrix $A$ is similar to a diagonal matrix $D = P^{-1}AP$. As we saw in Section 1.1.1, two similar matrices have the same determinant, therefore:

$$D = P^{-1}AP \;\rightarrow\; \det(D) = \det(A)$$

We can see that a matrix is singular, i.e. has $\det(A) = 0$, if and only if at least one of its eigenvalues is zero. Moreover, since $P$ is invertible, $\mathrm{rank}(D) = \mathrm{rank}(P^{-1}AP) = \mathrm{rank}(A)$. As the rank of a diagonal matrix is the number of non-zero rows, the rank of $A$ is the number of non-zero eigenvalues.
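Both properties can be illustrated numerically in R. In the sketch below, A is the 3x3 example matrix from Section 1.2, P is an arbitrary invertible matrix chosen only for illustration, and qr()$rank is used as a numerical rank.

```r
# Properties 2.1 and 2.2 checked on the 3x3 example from Section 1.2
A <- matrix(c(-5,  0, 0,
               3,  7, 0,
               4, -2, 3), nrow = 3, byrow = TRUE)

# An arbitrary invertible change-of-basis matrix (det = 3)
P <- matrix(c(1, 2, 0,
              0, 1, 1,
              1, 0, 1), nrow = 3, byrow = TRUE)
B <- solve(P) %*% A %*% P            # B is similar to A

eigen(A)$values                      # 7, -5, 3 (ordered by decreasing modulus)
eigen(B)$values                      # same eigenvalues, up to rounding and ordering

# Property 2.2: rank(A) equals the number of non-zero eigenvalues
qr(A)$rank                           # 3
sum(abs(eigen(A)$values) > 1e-8)     # 3
```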
2.3 Eigenvectors are linearly independent

Theorem: eigenvectors of a matrix corresponding to different eigenvalues are linearly independent.

Proof: we prove this by contradiction, i.e. we assume the opposite and arrive at a contradiction.

Consider the case of two non-zero eigenvectors of a $2 \times 2$ matrix $A$:

$$u_1 \neq 0,\; u_2 \neq 0, \qquad Au_1 = \lambda_1 u_1,\; Au_2 = \lambda_2 u_2$$

We assume that they are linearly dependent: $u_1 = c\,u_2$. Now we apply the matrix $A$ on both sides and use the fact that they are eigenvectors:

$$\lambda_1 u_1 = c\,\lambda_2 u_2 = \lambda_2 u_1 \;\rightarrow\; (\lambda_1 - \lambda_2)u_1 = 0$$

If the eigenvalues are different, $\lambda_1 - \lambda_2 \neq 0$ and therefore $u_1 = 0$, which is a contradiction, since we assumed that the eigenvectors are non-zero. Thus, if the eigenvalues are different, the eigenvectors are linearly independent.

For $n$ eigenvectors: first, assume linear dependence

$$u_1 = \sum_{j=2}^{n} \alpha_j u_j$$

Apply the matrix to both sides:

$$\lambda_1 u_1 = \sum_{j=2}^{n} \alpha_j \lambda_j u_j = \lambda_1 \sum_{j=2}^{n} \alpha_j u_j \;\rightarrow\; \sum_{j=2}^{n} (\lambda_1 - \lambda_j)\alpha_j u_j = 0$$

For different eigenvalues $\lambda_i \neq \lambda_j$, $i \neq j$, necessarily all coefficients $\alpha_j$ must be zero (using that, by induction on $n$, the eigenvectors $u_2, \dots, u_n$ are linearly independent), which contradicts the assumed dependence.

As a result, the eigenvectors of a matrix with $n$ different eigenvalues form a basis of the vector space and diagonalize the matrix (see Section 2.4).

2.4 A matrix is diagonalizable if and only if it has n linearly independent eigenvectors

Theorem: $A \in M_{n\times n}(\mathbb{R})$ is diagonalizable $\iff$ $A$ has $n$ linearly independent eigenvectors.

Proof: we have to prove both directions.

1. $A$ diagonalizable $\implies$ $n$ linearly independent eigenvectors.
2. $n$ linearly independent eigenvectors $\implies$ $A$ diagonalizable.

Proof of 1: assume $A$ is diagonalizable. Then we know it must be similar to a diagonal matrix:

$$\exists P \in M_{n\times n}(\mathbb{R}) \;\text{invertible} \;|\; P^{-1}AP \;\text{is diagonal}$$

We can write:

$$P^{-1}AP = \begin{pmatrix} \lambda_1 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \lambda_n \end{pmatrix}$$
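The content of the theorem can also be checked numerically. In the sketch below, again using the 3x3 example matrix from Section 1.2 (whose three eigenvalues are distinct), the columns of P are the eigenvectors returned by eigen(), and P^{-1}AP comes out diagonal up to floating-point rounding; round() is used only to suppress numerical noise.

```r
# Theorem 2.4 in practice: eigenvectors as columns of P diagonalize A
A <- matrix(c(-5,  0, 0,
               3,  7, 0,
               4, -2, 3), nrow = 3, byrow = TRUE)

ev <- eigen(A)
P  <- ev$vectors           # columns: eigenvectors of A
qr(P)$rank                 # 3: the eigenvectors are linearly independent

D <- solve(P) %*% A %*% P  # change of basis to the eigenvector basis
round(D, 10)               # diagonal matrix with the eigenvalues on the diagonal
ev$values                  # 7, -5, 3, matching the diagonal of D
```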
