
Eigenvalues, eigenvectors, and eigenspaces of linear operators
Math 130 Linear Algebra
D Joyce, Fall 2015

Eigenvalues and eigenvectors. We're looking at linear operators on a vector space V, that is, linear transformations x ↦ T(x) from the vector space V to itself.

When V has finite dimension n with a specified basis β, then T is described by an n×n matrix A = [T]β.

We're particularly interested in studying the geometry of these transformations in a way that we can't when the transformation goes from one vector space to a different one, namely, we'll compare the original vector x to its image T(x). Some of these vectors will be sent to other vectors on the same line, that is, a vector x will be sent to a multiple λx of itself.

Definition 1. For a given linear operator T : V → V, a nonzero vector x and a constant scalar λ are called an eigenvector and its eigenvalue, respectively, when T(x) = λx. For a given eigenvalue λ, the set of all x such that T(x) = λx is called the λ-eigenspace. The set of all eigenvalues for a transformation is called its spectrum.

When the operator T is described by a matrix A, then we'll associate the eigenvectors, eigenvalues, eigenspaces, and spectrum to A as well. As A directly describes a linear operator on F^n, we'll take its eigenspaces to be subsets of F^n.

Theorem 2. Each λ-eigenspace is a subspace of V.

Proof. Suppose that x and y are λ-eigenvectors and c is a scalar. Then

    T(x + cy) = T(x) + cT(y) = λx + cλy = λ(x + cy).

Therefore x + cy is also a λ-eigenvector. Thus, the set of λ-eigenvectors forms a subspace of F^n. q.e.d.

One reason these eigenvalues and eigenspaces are important is that you can determine many of the properties of the transformation from them, and those properties are the most important properties of the transformation. These are invariants. Note that the eigenvalues, eigenvectors, and eigenspaces of a linear transformation were defined in terms of the transformation, not in terms of a matrix that describes the transformation relative to a particular basis. That means that they are invariants of square matrices under conjugation. Recall that if A and B represent the transformation with respect to two different bases, then A and B are conjugate matrices, that is, B = P⁻¹AP where P is the transition matrix between the two bases. The eigenvalues are invariants, and they'll be the same for A and B. The corresponding eigenspaces will be isomorphic as subspaces of F^n under the linear isomorphism of conjugation by P. Thus we have the following theorem.

Theorem 3. The eigenvalues of a matrix A are the same as those of any conjugate matrix B = P⁻¹AP of A. Furthermore, each λ-eigenspace for A is isomorphic to the λ-eigenspace for B. In particular, the dimensions of each λ-eigenspace are the same for A and B.

When 0 is an eigenvalue. It's a special situation when a transformation has 0 as an eigenvalue. That means Ax = 0 for some nontrivial vector x. In general, a 0-eigenspace is the solution space of the homogeneous equation Ax = 0, what we've been calling the null space of A, and its dimension is what we've been calling the nullity of A. Since a square matrix is invertible if and only if its nullity is 0, we can conclude the following theorem.

Theorem 4. A square matrix is invertible if and only if 0 is not one of its eigenvalues. Put another way, a square matrix is singular if and only if 0 is one of its eigenvalues.

An example transformation that has 0 as an eigenvalue is a projection, like (x, y, z) ↦ (x, y, 0) that maps space to the xy-plane. For this projection, the 0-eigenspace is the z-axis.

When 1 is an eigenvalue. This is another important situation. It means the transformation has a subspace of fixed points. That's because a vector x is in the 1-eigenspace if and only if Ax = x.

An example transformation that has 1 as an eigenvalue is a reflection, like (x, y, z) ↦ (x, y, −z) that reflects space across the xy-plane. Its 1-eigenspace, that is, its subspace of fixed points, is the xy-plane. We'll look at reflections in R² in detail in a moment.

Another transformation with 1 as an eigenvalue is the shear transformation (x, y) ↦ (x + y, y). Its 1-eigenspace is the x-axis.

The characteristic polynomial, the main tool for finding eigenvalues. How do you find what values the eigenvalues λ can be? In the finite dimensional case, it comes down to finding the roots of a particular polynomial, called the characteristic polynomial.

Suppose that λ is an eigenvalue of A. That means there is a nontrivial vector x such that Ax = λx. Equivalently, Ax − λx = 0, and we can rewrite that as (A − λI)x = 0, where I is the identity matrix. But a homogeneous equation like (A − λI)x = 0 has a nontrivial solution x if and only if the determinant of A − λI is 0. We've shown the following theorem.

Theorem 5. A scalar λ is an eigenvalue of A if and only if det(A − λI) = 0. In other words, λ is a root of the polynomial det(A − λI), which we call the characteristic polynomial or eigenpolynomial. The equation det(A − λI) = 0 is called the characteristic equation of A.
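As a quick numerical illustration (not part of the original notes), the sketch below checks Theorems 4 and 5 on the shear (x, y) ↦ (x + y, y) mentioned above. The helper names det2 and char_poly are ad hoc, chosen just for this example.

```python
# Illustrative sketch: Theorem 5 for the shear (x, y) -> (x + y, y),
# whose matrix is [[1, 1], [0, 1]].

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def char_poly(A, lam):
    """Evaluate the characteristic polynomial det(A - lambda*I) of a 2x2 matrix A."""
    return det2([[A[0][0] - lam, A[0][1]],
                 [A[1][0], A[1][1] - lam]])

shear = [[1, 1],
         [0, 1]]

# det(A - lambda*I) = (1 - lambda)^2, so lambda = 1 is the only eigenvalue.
assert char_poly(shear, 1) == 0
# 0 is not a root, so by Theorem 4 the shear is invertible.
assert char_poly(shear, 0) != 0

# The 1-eigenspace is the x-axis: vectors (x, 0) are fixed.
x, y = 5, 0
image = (shear[0][0] * x + shear[0][1] * y,
         shear[1][0] * x + shear[1][1] * y)
assert image == (x, y)
```

Evaluating det(A − λI) at candidate scalars like this is exactly the test Theorem 5 describes; finding all roots symbolically is the subject of the next section.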

Note that the characteristic polynomial has degree n. That means that there are at most n eigenvalues. Since some eigenvalues may be repeated roots of the characteristic polynomial, there may be fewer than n eigenvalues. Another reason there may be fewer than n eigenvalues is that the roots of the characteristic polynomial may not lie in the field F. That won't be a problem if F is the field of complex numbers C, since the Fundamental Theorem of Algebra guarantees that roots of polynomials lie in C.

We can use characteristic polynomials to give an alternate proof that conjugate matrices have the same eigenvalues. Suppose that B = P⁻¹AP. We'll show that the characteristic polynomials of A and B are the same, that is,

    det(A − λI) = det(B − λI).

That will imply that they have the same eigenvalues. Here's the computation:

    det(B − λI) = det(P⁻¹AP − λI)
                = det(P⁻¹AP − P⁻¹λIP)
                = det(P⁻¹(A − λI)P)
                = det(P⁻¹) det(A − λI) det(P)
                = det(A − λI).

Eigenvalues of reflections in R². We've looked at reflections across some lines in the plane. There's a general form for a reflection across the line of slope tan θ, that is, across the line that makes an angle of θ with the x-axis. Namely, the matrix transformation x ↦ Ax, where

    A = [ cos 2θ   sin 2θ ]
        [ sin 2θ  −cos 2θ ],

describes such a reflection.

A reflection has fixed points, namely, the points on the line being reflected across. Therefore, 1 is an eigenvalue of a reflection, and the 1-eigenspace is the line of reflection. Orthogonal to that line is a line passing through the origin, and its points are reflected across the origin, that is to say, they're negated. Therefore, −1 is an eigenvalue, and the orthogonal line is its eigenspace. Reflections have only these two eigenvalues, ±1.

How to find eigenvalues and eigenspaces. Now we know the eigenvalues are the roots of the characteristic polynomial. We'll illustrate this with an example. Here's the process to find all the eigenvalues and their associated eigenspaces.

1) Form the characteristic polynomial det(A − λI).

2) Find all the roots of it. Since it is an nth degree polynomial, that can be hard to do by hand if n is very large. Its roots are the eigenvalues λ1, λ2, ....

3) For each eigenvalue λi, solve the matrix equation (A − λiI)x = 0 to find the λi-eigenspace.

Example 6. We'll find the characteristic polynomial, the eigenvalues and their associated eigenvectors for this matrix:

    A = [  1  0  0 ]
        [ −3  3  0 ]
        [  3  2  2 ]

The characteristic polynomial is

    |A − λI| = | 1−λ   0    0  |
               | −3   3−λ   0  |
               |  3    2   2−λ |

             = (1 − λ)(3 − λ)(2 − λ).

Fortunately, this polynomial is already in factored form, so we can read off the three eigenvalues: λ1 = 1, λ2 = 3, and λ3 = 2. (It doesn't matter in which order you name them.) Thus, the spectrum of this matrix is the set {1, 2, 3}.

Let's find the λ1-eigenspace. We need to solve Ax = λ1x. That's the same as solving (A − λ1I)x = 0. The matrix A − λ1I is

    [  0  0  0 ]
    [ −3  2  0 ]
    [  3  2  1 ]

which row reduces to

    [ 1  0  1/6 ]
    [ 0  1  1/4 ]
    [ 0  0   0  ]

and from that we can read off the general solution

    (x, y, z) = (−z/6, −z/4, z)

where z is arbitrary. That's the one-dimensional 1-eigenspace (which consists of the fixed points of the transformation).

Next, find the λ2-eigenspace. The matrix A − λ2I is

    [ −2  0   0 ]
    [ −3  0   0 ]
    [  3  2  −1 ]

which row reduces to

    [ 1  0    0  ]
    [ 0  1  −1/2 ]
    [ 0  0    0  ]

and from that we can read off the general solution

    (x, y, z) = (0, z/2, z)

where z is arbitrary. That's the one-dimensional 3-eigenspace.

Finally, find the λ3-eigenspace. The matrix A − λ3I is

    [ −1  0  0 ]
    [ −3  1  0 ]
    [  3  2  0 ]

which row reduces to

    [ 1  0  0 ]
    [ 0  1  0 ]
    [ 0  0  0 ]

and from that we can read off the general solution

    (x, y, z) = (0, 0, z)

where z is arbitrary. That's the one-dimensional 2-eigenspace.
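The three eigenspaces found by row reduction are easy to double-check by hand or by machine: picking one representative vector from each eigenspace, Av should come out to exactly λv. The sketch below (an illustrative check, not part of the original notes; the helper matvec is ad hoc) does this for the matrix of Example 6.

```python
# Illustrative check of Example 6: confirm that a representative vector
# from each eigenspace satisfies A v = lambda v.

A = [[ 1, 0, 0],
     [-3, 3, 0],
     [ 3, 2, 2]]

def matvec(M, v):
    """Multiply a 3x3 matrix by a length-3 vector."""
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# One representative from each eigenspace (z chosen to clear fractions):
#   lambda = 1: (-z/6, -z/4, z) with z = 12 gives (-2, -3, 12)
#   lambda = 3: (0, z/2, z)     with z = 2  gives (0, 1, 2)
#   lambda = 2: (0, 0, z)       with z = 1  gives (0, 0, 1)
eigenpairs = [(1, [-2, -3, 12]),
              (3, [0, 1, 2]),
              (2, [0, 0, 1])]

for lam, v in eigenpairs:
    assert matvec(A, v) == [lam * vi for vi in v]
```

Since each eigenspace here is one-dimensional, checking a single nonzero representative checks the whole eigenspace: every other vector in it is a scalar multiple.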

Math 130 Home Page at http://math.clarku.edu/~ma130/
