CHAPTER 3  EIGENVALUES AND EIGENVECTORS OF LINEAR OPERATORS

§3.1 Eigenvalues and Eigenvectors

Eigenvalues and eigenvectors are associated with linear operators, not linear transformations in general. Linear operators map a vector space to itself, whereas linear transformations map one vector space to another. We shall see how important eigenvectors are in applications of linear algebra.

Definition 3.1  An eigenvector v of a linear operator L on a vector space V is a nonzero vector v that satisfies the equation

    L(v) = λv,                                                         (3.1)

for some scalar λ. The value of λ is called the eigenvalue associated with the eigenvector v. Eigenvalue and eigenvector together make an eigenpair.

Other terminology for eigenvalues is "latent values", "proper values", or "characteristic values". In R² and R³, eigenvectors corresponding to a nonzero eigenvalue are easy to visualize. They are vectors that, when mapped by L, do not change direction. They may change length (due to λ), and may even point in the direction opposite to v (if λ < 0), but they cannot point in any other direction. In any other space, v is an eigenvector if it is mapped to a multiple of itself by L. In R² and R³, wherein we can visualize vectors, the action of a linear operator may make eigenpairs clear. Three instances of this are the linear operators in Examples 2.11, 2.15, and 2.16.

Example 3.1  What are eigenpairs of the linear operator in Example 2.16?

Solution  The linear operator reflects vectors in the xz-plane. Clearly any vector from the origin to a point in the xz-plane lies in the xz-plane, and its reflection in the xz-plane is itself. Hence, L(v) = v for any such vector. Consequently, every nonzero vector that lies in the xz-plane is an eigenvector with eigenvalue λ = 1. In addition, vectors perpendicular to the xz-plane reflect to the negatives of themselves. Hence, for any such vector, L(v) = −v. In other words, vectors perpendicular to the xz-plane are eigenvectors with eigenvalue λ = −1. •

Example 3.2  What are eigenpairs of the linear operator of Example 2.15 when 0 < θ < π/2?

Solution  When vectors along the z-axis are rotated around the z-axis, they are unchanged. In other words, such vectors are eigenvectors with eigenvalue λ = 1. •

Example 3.3  What are eigenpairs of the linear operator in Example 2.11?

Solution  The linear operator takes orthogonal components of vectors along the plane y = x. Clearly any vector from the origin to a point in the plane lies in the plane, and its orthogonal component is itself. Hence, L(v) = v for any such vector. Consequently, every nonzero vector that lies in the plane y = x is an eigenvector with eigenvalue λ = 1. In addition, the orthogonal component of any vector perpendicular to the plane y = x is the zero vector. Hence, for any such vector, L(v) = 0. In other words, vectors perpendicular to the plane are eigenvectors with eigenvalue λ = 0. •

In general, it is impossible to spot eigenvectors of linear operators as we did in the last three examples. Matrices associated with linear operators provide an alternative. An eigenvalue λ and an eigenvector v of a linear operator L satisfy L(v) = λv. When a matrix A can be associated with the linear operator, then L(v) = Av. In such cases, we can say that (λ, v) is an eigenpair for the linear operator if

    Av = λv   (provided v ≠ 0).                                        (3.2)

In this equation, v must be its column vector (or matrix) representation. If we were to equate components, we would obtain a system of nonlinear equations for λ and the components of v.
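As a quick numerical illustration of equation 3.2, here is a minimal Python sketch (assuming NumPy is available; the reflection matrix is the one given later in this section for Example 2.16, and the two test vectors are arbitrary choices of ours). It checks that a vector lying in the xz-plane and a vector perpendicular to it satisfy Av = λv with λ = 1 and λ = −1, respectively.

```python
import numpy as np

# Reflection in the xz-plane: (x, y, z) -> (x, -y, z).
# This is the matrix given later in this section for Example 2.16.
A = np.array([[1.0,  0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])

v_in_plane = np.array([2.0, 0.0, -5.0])  # any nonzero vector in the xz-plane
v_perp     = np.array([0.0, 3.0, 0.0])   # any nonzero vector perpendicular to it

# Vectors in the xz-plane are unchanged: eigenvalue lambda = 1.
print(np.allclose(A @ v_in_plane, 1.0 * v_in_plane))   # True

# Vectors perpendicular to the plane are negated: eigenvalue lambda = -1.
print(np.allclose(A @ v_perp, -1.0 * v_perp))          # True
```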
Alternatively, suppose we rewrite the system in the following equivalent ways:

    Av = λIv          (I an identity matrix)
    Av − λIv = 0      (0 a zero matrix)
    (A − λI)v = 0                                                      (3.3)

We can regard this as a system of linear, homogeneous equations in the components of v (where some of the coefficients involve the unknown λ). Such a system has nontrivial solutions if and only if the determinant of the coefficient matrix is zero, det(A − λI) = 0. We have just established the following theorem.

Theorem 3.1  When A is the matrix associated with a linear operator L, λ is an eigenvalue of L if, and only if, it satisfies the equation

    det(A − λI) = 0.                                                   (3.4)

Equation 3.4 is called the characteristic equation of the matrix A representing the linear operator; its left side is called the characteristic polynomial of matrix A. This certainly would be appropriate terminology were we to call λ a "characteristic" value. Characteristic polynomial and characteristic equation are the accepted terminology no matter what term is used for eigenvalues.

Before making some general observations, we illustrate how the characteristic equation gives eigenvalues in the earlier examples of this section. The matrix associated with the linear operator of Example 2.16 is

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Its eigenvalues are defined by the characteristic equation

$$0 = \det(A - \lambda I) = \det\begin{pmatrix} 1-\lambda & 0 & 0 \\ 0 & -1-\lambda & 0 \\ 0 & 0 & 1-\lambda \end{pmatrix} = -(1-\lambda)^2(1+\lambda).$$

Solutions are λ = ±1. These were the eigenvalues found in Example 3.1.

The matrix associated with the linear operator of Example 2.15 is

$$A = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Eigenvalues are defined by the characteristic equation

$$0 = \det(A - \lambda I) = \det\begin{pmatrix} \cos\theta-\lambda & -\sin\theta & 0 \\ \sin\theta & \cos\theta-\lambda & 0 \\ 0 & 0 & 1-\lambda \end{pmatrix} = (1-\lambda)\left[(\cos\theta-\lambda)^2 + \sin^2\theta\right].$$

Clearly λ = 1 is an eigenvalue. In addition, we should set

$$0 = (\cos\theta - \lambda)^2 + \sin^2\theta = \cos^2\theta - 2\lambda\cos\theta + \lambda^2 + \sin^2\theta = \lambda^2 - 2\lambda\cos\theta + 1.$$

Solutions of this quadratic are

$$\lambda = \frac{2\cos\theta \pm \sqrt{4\cos^2\theta - 4}}{2} = \cos\theta \pm \sqrt{-\sin^2\theta},$$

an impossibility. Thus, (cos θ − λ)² + sin²θ ≠ 0, and λ = 1 is the only eigenvalue, as was discovered in Example 3.2.

The matrix associated with the linear operator of Example 3.3 is

$$A = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Its eigenvalues are defined by the characteristic equation

$$0 = \det(A - \lambda I) = \det\begin{pmatrix} 1/2-\lambda & 1/2 & 0 \\ 1/2 & 1/2-\lambda & 0 \\ 0 & 0 & 1-\lambda \end{pmatrix} = -\lambda(1-\lambda)^2.$$

Solutions are λ = 0 and λ = 1. These were the eigenvalues found in Example 3.3.

Here is another example.

Example 3.4  If the matrix associated with a linear operator on some 3-dimensional space is

$$A = \begin{pmatrix} 5 & -3 & 3 \\ -6 & 8 & -6 \\ -12 & 12 & -10 \end{pmatrix},$$

find eigenvalues of the operator.

Solution  Eigenvalues are given by the characteristic equation

$$0 = \det(A - \lambda I) = \det\begin{pmatrix} 5-\lambda & -3 & 3 \\ -6 & 8-\lambda & -6 \\ -12 & 12 & -10-\lambda \end{pmatrix}.$$

If we expand along the first row,

$$\begin{aligned}
0 &= (5-\lambda)\left[(8-\lambda)(-10-\lambda) + 72\right] + 3\left[-6(-10-\lambda) - 72\right] - 3\left[72 - 12(8-\lambda)\right] \\
  &= (5-\lambda)(\lambda^2 + 2\lambda - 8) + 3(6\lambda - 12) - 3(12\lambda - 24) \\
  &= -\lambda^3 + 3\lambda^2 - 4 = -(\lambda + 1)(\lambda - 2)^2.
\end{aligned}$$

Eigenvalues are therefore λ = −1 and λ = 2. •

These examples indicate that characteristic equation 3.4 for a linear operator on an n-dimensional space is an nth degree polynomial equation in λ. Its solutions are the eigenvalues of the linear operator. We now turn our attention to finding eigenvectors once eigenvalues have been calculated. With λ known, equation 3.3 represents a system of homogeneous, linear equations in the components of the corresponding eigenvector v. Because the determinant of the system is zero, there is an infinity of solutions.
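The eigenvalues of Example 3.4 can be double-checked numerically. The following sketch (a verification aid of ours, not part of the text; it assumes NumPy) recovers the coefficients of the monic characteristic polynomial det(λI − A) = λ³ − 3λ² + 4 and its roots λ = −1 and λ = 2.

```python
import numpy as np

# Matrix of Example 3.4.
A = np.array([[  5.0, -3.0,   3.0],
              [ -6.0,  8.0,  -6.0],
              [-12.0, 12.0, -10.0]])

# np.poly returns the coefficients (highest power first) of the monic
# polynomial whose roots are the eigenvalues of A, i.e. det(lambda*I - A).
coeffs = np.poly(A)
print(np.round(coeffs, 6))                 # approximately [ 1. -3.  0.  4.]

# Its roots are the eigenvalues lambda = -1 and lambda = 2 (a double root).
print(np.round(np.roots(coeffs), 6))       # approximately [ 2.  2. -1.]
print(np.round(np.linalg.eigvals(A), 6))   # same values, computed directly
```

Note that the convention here is det(λI − A) rather than det(A − λI); the two differ only by a factor of (−1)³ and therefore have the same roots.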
In other words, eigenvectors are not unique; there is always an infinity of eigenvectors corresponding to every eigenvalue. Because the zero vector is never considered an eigenvector of a linear operator, the totality of eigenvectors corresponding to a single eigenvalue does not constitute a subspace of the vector space on which the operator operates. If we include the zero vector in the set of eigenvectors, the set becomes a subspace, called the eigenspace associated with the eigenvalue. However, realize that all nonzero vectors in the eigenspace are eigenvectors, but the zero vector is not an eigenvector.

Example 3.5  Find eigenvectors corresponding to the eigenvalues λ = −1 and λ = 2 in Example 3.4. Assume that the operator associated with the matrix operates on R³.

Solution  Eigenvectors corresponding to λ = −1 satisfy

$$0 = (A + I)\mathbf{v} = \begin{pmatrix} 6 & -3 & 3 \\ -6 & 9 & -6 \\ -12 & 12 & -9 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}.$$

The reduced row echelon form for the augmented matrix of this system is

$$\begin{pmatrix} 1 & 0 & 1/4 & 0 \\ 0 & 1 & -1/2 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

When we convert to equations,

$$v_1 + \frac{v_3}{4} = 0, \qquad v_2 - \frac{v_3}{2} = 0.$$

Eigenvectors are therefore

$$\mathbf{v} = \begin{pmatrix} -v_3/4 \\ v_3/2 \\ v_3 \end{pmatrix} = -\frac{v_3}{4}\begin{pmatrix} 1 \\ -2 \\ -4 \end{pmatrix}.$$

In other words, any multiple of v = ⟨1, −2, −4⟩ is an eigenvector corresponding to λ = −1; the eigenspace of λ = −1 is a line of eigenvectors. To check our calculations, we could multiply

$$A\mathbf{v} = \begin{pmatrix} 5 & -3 & 3 \\ -6 & 8 & -6 \\ -12 & 12 & -10 \end{pmatrix}\begin{pmatrix} 1 \\ -2 \\ -4 \end{pmatrix} = \begin{pmatrix} -1 \\ 2 \\ 4 \end{pmatrix} = (-1)\begin{pmatrix} 1 \\ -2 \\ -4 \end{pmatrix} = \lambda\mathbf{v}.$$

Eigenvectors corresponding to λ = 2 are given by

$$0 = (A - 2I)\mathbf{v} = \begin{pmatrix} 3 & -3 & 3 \\ -6 & 6 & -6 \\ -12 & 12 & -12 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix}.$$
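As a numerical cross-check of the λ = −1 computation in Example 3.5, one can extract the eigenspace as the null space of A + I and confirm that it is the line through ⟨1, −2, −4⟩. This sketch again assumes NumPy; the helper null_space_basis is ours, not from the text, and uses the SVD because A + I is exactly singular.

```python
import numpy as np

# Matrix of Examples 3.4 and 3.5.
A = np.array([[  5.0, -3.0,   3.0],
              [ -6.0,  8.0,  -6.0],
              [-12.0, 12.0, -10.0]])

def null_space_basis(M, tol=1e-10):
    """Return columns spanning the null space of M, computed from the SVD."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T        # right singular vectors for zero singular values

# Eigenvectors for lambda = -1 are the nontrivial solutions of (A + I)v = 0.
basis = null_space_basis(A + np.eye(3))
v = basis[:, 0]
v = v / v[0]                  # rescale so the first component is 1
print(np.round(v, 6))         # approximately [ 1. -2. -4.]

# Confirm that A v = (-1) v.
print(np.allclose(A @ v, -1.0 * v))   # True
```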
