Linear Algebra (part 4): Eigenvalues, Diagonalization, and the Jordan Form (by Evan Dummit, 2020, v. 2.00)

Contents

4 Eigenvalues, Diagonalization, and the Jordan Canonical Form
  4.1 Eigenvalues, Eigenvectors, and the Characteristic Polynomial
    4.1.1 Eigenvalues and Eigenvectors
    4.1.2 Eigenvalues and Eigenvectors of Matrices
    4.1.3 Eigenspaces
  4.2 Diagonalization
  4.3 Generalized Eigenvectors and the Jordan Canonical Form
    4.3.1 Generalized Eigenvectors
    4.3.2 The Jordan Canonical Form
  4.4 Applications of Diagonalization and the Jordan Canonical Form
    4.4.1 Spectral Mapping and the Cayley-Hamilton Theorem
    4.4.2 Transition Matrices and Incidence Matrices
    4.4.3 Systems of Linear Differential Equations
    4.4.4 Matrix Exponentials and the Jordan Form
    4.4.5 The Spectral Theorem for Hermitian Operators

4 Eigenvalues, Diagonalization, and the Jordan Canonical Form

In this chapter, we will discuss eigenvalues and eigenvectors: these are characteristic values (and vectors) associated to a linear operator $T : V \to V$ that will allow us to study $T$ in a particularly convenient way. Our ultimate goal is to describe methods for finding a basis for $V$ such that the associated matrix for $T$ has an especially simple form. We will first describe diagonalization, the procedure for (trying to) find a basis such that the associated matrix for $T$ is a diagonal matrix, and characterize the linear operators that are diagonalizable. Unfortunately, not all linear operators are diagonalizable, so we will then discuss a method for computing the Jordan canonical form of a matrix, which is the representation that is as close to a diagonal matrix as possible. We close with a few applications of the Jordan canonical form, including a proof of the Cayley-Hamilton theorem that any matrix satisfies its characteristic polynomial.

4.1 Eigenvalues, Eigenvectors, and the Characteristic Polynomial

• Suppose that we have a linear transformation $T : V \to V$ from a (finite-dimensional) vector space $V$ to itself. We would like to determine whether there exists a basis $\beta$ of $V$ such that the associated matrix $[T]_\beta^\beta$ is a diagonal matrix.

  ◦ Ultimately, our reason for asking this question is that we would like to describe $T$ in as simple a way as possible, and it is unlikely we could hope for anything simpler than a diagonal matrix.

  ◦ So suppose that $\beta = \{v_1, \dots, v_n\}$ and the diagonal entries of $[T]_\beta^\beta$ are $\{\lambda_1, \dots, \lambda_n\}$.

  ◦ Then, by assumption, we have $T(v_i) = \lambda_i v_i$ for each $1 \leq i \leq n$: the linear transformation $T$ behaves like scalar multiplication by $\lambda_i$ on the vector $v_i$.

  ◦ Conversely, if we were able to find a basis $\beta$ of $V$ such that $T(v_i) = \lambda_i v_i$ for some scalars $\lambda_i$, with $1 \leq i \leq n$, then the associated matrix $[T]_\beta^\beta$ would be a diagonal matrix.

  ◦ This suggests we should study vectors $v$ such that $T(v) = \lambda v$ for some scalar $\lambda$.

4.1.1 Eigenvalues and Eigenvectors

• Definition: If $T : V \to V$ is a linear transformation, a nonzero vector $v$ with $T(v) = \lambda v$ is called an eigenvector of $T$, and the corresponding scalar $\lambda$ is called an eigenvalue of $T$.

  ◦ Important note: We do not consider the zero vector $\mathbf{0}$ an eigenvector. (The reason for this convention is to ensure that if $v$ is an eigenvector, then its corresponding eigenvalue $\lambda$ is unique.)

  ◦ Note also that (implicitly) $\lambda$ must be an element of the scalar field of $V$, since otherwise $\lambda v$ does not make sense.

  ◦ When $V$ is a vector space of functions, we often use the word eigenfunction in place of eigenvector.
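• Before working through examples, here is a brief numerical illustration of the motivating computation above (a minimal sketch in Python with NumPy, not part of the original notes; it previews the diagonalization procedure of Section 4.2): if the columns of an invertible matrix $P$ are eigenvectors of a matrix $A$, then $P^{-1}AP$ is diagonal, with the corresponding eigenvalues on the diagonal. The matrix below is the standard matrix of the map $T(x, y) = \langle 2x + 3y, \, x + 4y \rangle$ from the examples that follow.

```python
import numpy as np

# Standard matrix of T(x, y) = (2x + 3y, x + 4y).
A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

# Columns of P form a basis of eigenvectors of A:
# (3, -1) with eigenvalue 1, and (1, 1) with eigenvalue 5.
P = np.array([[3.0, 1.0],
              [-1.0, 1.0]])

# The matrix of T in this eigenvector basis is P^{-1} A P,
# which is diagonal with the eigenvalues on the diagonal.
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))  # [[1. 0.], [0. 5.]]
```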
• Here are a few examples of linear transformations and eigenvectors:

  ◦ Example: If $T : \mathbb{R}^2 \to \mathbb{R}^2$ is the map with $T(x, y) = \langle 2x + 3y, \, x + 4y \rangle$, then the vector $v = \langle 3, -1 \rangle$ is an eigenvector of $T$ with eigenvalue 1, since $T(v) = \langle 3, -1 \rangle = v$.

  ◦ Example: If $T : \mathbb{C}^2 \to \mathbb{C}^2$ is the map with $T(x, y) = \langle 2x + 3y, \, x + 4y \rangle$, the vector $w = \langle 1, 1 \rangle$ is an eigenvector of $T$ with eigenvalue 5, since $T(w) = \langle 5, 5 \rangle = 5w$.

  ◦ Example: If $T : M_{2 \times 2}(\mathbb{R}) \to M_{2 \times 2}(\mathbb{R})$ is the transpose map, then the matrix $\begin{pmatrix} 1 & 1 \\ 1 & 3 \end{pmatrix}$ is an eigenvector of $T$ with eigenvalue 1.

  ◦ Example: If $T : M_{2 \times 2}(\mathbb{R}) \to M_{2 \times 2}(\mathbb{R})$ is the transpose map, then the matrix $\begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix}$ is an eigenvector of $T$ with eigenvalue $-1$.

  ◦ Example: If $T : P(\mathbb{R}) \to P(\mathbb{R})$ is the map with $T(f(x)) = x f'(x)$, then for any integer $n \geq 0$, the polynomial $x^n$ is an eigenfunction of $T$ with eigenvalue $n$, since $T(x^n) = x \cdot nx^{n-1} = nx^n$.

  ◦ Example: If $V$ is the space of infinitely-differentiable functions and $D : V \to V$ is the differentiation operator, the function $f(x) = e^{rx}$ is an eigenfunction with eigenvalue $r$, for any real number $r$, since $D(e^{rx}) = re^{rx}$.

  ◦ Example: If $T : V \to V$ is any linear transformation and $v$ is a nonzero vector in $\ker(T)$, then $v$ is an eigenvector of $T$ with eigenvalue 0. In fact, the eigenvectors with eigenvalue 0 are precisely the nonzero vectors in $\ker(T)$.

• Finding eigenvectors is a generalization of computing the kernel of a linear transformation; in fact, we can reduce the problem of finding eigenvectors to that of computing the kernel of a related linear transformation:

• Proposition (Eigenvalue Criterion): If $T : V \to V$ is a linear transformation, the nonzero vector $v$ is an eigenvector of $T$ with eigenvalue $\lambda$ if and only if $v$ is in $\ker(\lambda I - T) = \ker(T - \lambda I)$, where $I$ is the identity transformation on $V$.

  ◦ This criterion reduces the computation of eigenvectors to that of computing the kernel of a collection of linear transformations.

  ◦ Proof: Assume $v \neq 0$. Then $v$ is an eigenvector of $T$ with eigenvalue $\lambda$ $\iff$ $T(v) = \lambda v$ $\iff$ $(\lambda I)v - T(v) = 0$ $\iff$ $(\lambda I - T)(v) = 0$ $\iff$ $v$ is in the kernel of $\lambda I - T$. The equality $\ker(\lambda I - T) = \ker(T - \lambda I)$ is also immediate.

• We remark that some linear operators may have no eigenvectors at all.

• Example: If $I : \mathbb{R}[x] \to \mathbb{R}[x]$ is the integration operator $I(p) = \int_0^x p(t) \, dt$, show that $I$ has no eigenvectors.

  ◦ Suppose that $I(p) = \lambda p$, so that $\int_0^x p(t) \, dt = \lambda p(x)$.

  ◦ Then, differentiating both sides with respect to $x$ and applying the fundamental theorem of calculus yields $p(x) = \lambda p'(x)$.

  ◦ If $p$ had positive degree $n$, then $\lambda p'(x)$ would have degree at most $n - 1$, so it could not equal $p(x)$.

  ◦ Thus, $p$ must be a constant polynomial. But the only constant polynomial with $I(p) = \lambda p$ is the zero polynomial, since for a nonzero constant $c$ the integral $I(c) = cx$ is not constant; and the zero polynomial is by definition not an eigenvector. Thus, $I$ has no eigenvectors.

• In other cases, the existence of eigenvectors may depend on the scalar field being used.

• Example: Show that $T : F^2 \to F^2$ defined by $T(x, y) = \langle y, -x \rangle$ has no eigenvectors when $F = \mathbb{R}$, but does have eigenvectors when $F = \mathbb{C}$.

  ◦ If $T(x, y) = \lambda \langle x, y \rangle$, we get $y = \lambda x$ and $-x = \lambda y$, so that $(\lambda^2 + 1)y = 0$.

  ◦ If $y$ were zero, then $x = -\lambda y$ would also be zero, which is impossible. Thus $y \neq 0$, and so $\lambda^2 + 1 = 0$.

  ◦ When $F = \mathbb{R}$ there is no such scalar $\lambda$, so there are no eigenvectors in this case.

  ◦ However, when $F = \mathbb{C}$, we get $\lambda = \pm i$. Since $y = \lambda x$, the eigenvectors are $\langle x, ix \rangle$ with eigenvalue $i$ and $\langle x, -ix \rangle$ with eigenvalue $-i$, for any $x \neq 0$.
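• The last example is easy to check numerically: numpy.linalg.eig computes over the complex numbers, so it finds the eigenvalues $\pm i$ of the standard matrix of $T(x, y) = \langle y, -x \rangle$ even though that matrix has only real entries. The following is a minimal sketch (ours, not from the notes):

```python
import numpy as np

# Standard matrix of T(x, y) = (y, -x).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# numpy.linalg.eig works over C, so it returns the complex
# eigenvalues +i and -i even though A is a real matrix.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # approximately [0.+1.j, 0.-1.j]

# Each column of `eigenvectors` is an eigenvector; check A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```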
• Computing eigenvectors of general linear transformations on infinite-dimensional spaces can be quite difficult.

  ◦ For example, if $V$ is the space of infinitely-differentiable functions, then computing the eigenvectors of the map $T : V \to V$ with $T(f) = f'' + xf'$ requires solving the differential equation $f'' + xf' = \lambda f$ for an arbitrary $\lambda$.

  ◦ It is quite hard to solve that particular differential equation for a general $\lambda$ (at least, without resorting to using an infinite series expansion to describe the solutions), and the solutions for most values of $\lambda$ are non-elementary functions.

• In the finite-dimensional case, however, we can recast everything using matrices.

• Proposition (Eigenvalues and Matrices): Suppose $V$ is a finite-dimensional vector space with ordered basis $\beta$ and that $T : V \to V$ is linear. Then $v$ is an eigenvector of $T$ with eigenvalue $\lambda$ if and only if $[v]_\beta$ is an eigenvector of left-multiplication by $[T]_\beta^\beta$ with eigenvalue $\lambda$.

  ◦ Proof: Note that $v \neq 0$ if and only if $[v]_\beta \neq 0$, so now assume $v \neq 0$. Then $v$ is an eigenvector of $T$ with eigenvalue $\lambda$ $\iff$ $T(v) = \lambda v$ $\iff$ $[T(v)]_\beta = [\lambda v]_\beta$ $\iff$ $[T]_\beta^\beta [v]_\beta = \lambda [v]_\beta$ $\iff$ $[v]_\beta$ is an eigenvector of left-multiplication by $[T]_\beta^\beta$ with eigenvalue $\lambda$.

4.1.2 Eigenvalues and Eigenvectors of Matrices

• We will now study eigenvalues and eigenvectors of matrices. For convenience, we restate the definition for this setting:

• Definition: For $A$ an $n \times n$ matrix, a nonzero vector $x$ with $Ax = \lambda x$ is called¹ an eigenvector of $A$, and the corresponding scalar $\lambda$ is called an eigenvalue of $A$.

  ◦ Example: If $A = \begin{pmatrix} 2 & 3 \\ 1 & 4 \end{pmatrix}$, the vector $x = \begin{pmatrix} 3 \\ -1 \end{pmatrix}$ is an eigenvector of $A$ with eigenvalue 1, because $Ax = \begin{pmatrix} 2 & 3 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 3 \\ -1 \end{pmatrix} = \begin{pmatrix} 3 \\ -1 \end{pmatrix} = x$.

¹Technically, such a vector $x$ is a right eigenvector of $A$: this stands in contrast to a vector $y$ with $yA = \lambda y$, which is called a left eigenvector of $A$.
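• As a quick numerical check of the example above, and of the footnote's distinction between right and left eigenvectors, here is a short sketch in Python with NumPy (the specific left eigenvector $y = \langle 1, -1 \rangle$ below is our own illustrative computation, not taken from the notes):

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 4.0]])

# Right eigenvector from the example above: A x = 1 * x.
x = np.array([3.0, -1.0])
assert np.allclose(A @ x, 1.0 * x)

# A left eigenvector y satisfies y A = lambda y; transposing shows that
# left eigenvectors of A are exactly right eigenvectors of A^T.
# Here y = (1, -1) is a left eigenvector of A with eigenvalue 1:
y = np.array([1.0, -1.0])
assert np.allclose(y @ A, 1.0 * y)
print("checks passed")
```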
