Chapter E Eigenvalues
When we have a square matrix of size n, A, and we multiply it by a vector x from C^n to form the matrix-vector product (Definition MVP), the result is another vector in C^n. So we can adopt a functional view of this computation: the act of multiplying by a square matrix is a function that converts one vector (x) into another one (Ax) of the same size. For some vectors, this seemingly complicated computation is really no more complicated than scalar multiplication. The vectors vary according to the choice of A, so the question is to determine, for an individual choice of A, if there are any such vectors, and if so, which ones. It happens in a variety of situations that these vectors (and the scalars that go along with them) are of special interest.

We will be solving polynomial equations in this chapter, which raises the specter of complex numbers as roots. This distinct possibility is our main reason for entertaining the complex numbers throughout the course. You might be moved to revisit Section CNO and Section O.

Section EE Eigenvalues and Eigenvectors

In this section, we will define the eigenvalues and eigenvectors of a matrix, and see how to compute them. More theoretical properties will be taken up in the next section.

Subsection EEM Eigenvalues and Eigenvectors of a Matrix

We start with the principal definition for this chapter.

Definition EEM Eigenvalues and Eigenvectors of a Matrix
Suppose that A is a square matrix of size n, x ≠ 0 is a vector in C^n, and λ is a scalar in C. Then we say x is an eigenvector of A with eigenvalue λ if

    Ax = λx

Before going any further, perhaps we should convince you that such things ever happen at all. Understand the next example, but do not concern yourself with where the pieces come from. We will have methods soon enough to be able to discover these eigenvectors ourselves.

Example SEE Some eigenvalues and eigenvectors
Consider the matrix

    A = \begin{bmatrix} 204 & 98 & -26 & -10 \\ -280 & -134 & 36 & 14 \\ 716 & 348 & -90 & -36 \\ -472 & -232 & 60 & 28 \end{bmatrix}

and the vectors

    x = \begin{bmatrix} 1 \\ -1 \\ 2 \\ 5 \end{bmatrix}   y = \begin{bmatrix} -3 \\ 4 \\ -10 \\ 4 \end{bmatrix}   z = \begin{bmatrix} -3 \\ 7 \\ 0 \\ 8 \end{bmatrix}   w = \begin{bmatrix} 1 \\ -1 \\ 4 \\ 0 \end{bmatrix}

Then

    Ax = \begin{bmatrix} 204 & 98 & -26 & -10 \\ -280 & -134 & 36 & 14 \\ 716 & 348 & -90 & -36 \\ -472 & -232 & 60 & 28 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \\ 2 \\ 5 \end{bmatrix} = \begin{bmatrix} 4 \\ -4 \\ 8 \\ 20 \end{bmatrix} = 4 \begin{bmatrix} 1 \\ -1 \\ 2 \\ 5 \end{bmatrix} = 4x

so x is an eigenvector of A with eigenvalue λ = 4.

Also,

    Ay = \begin{bmatrix} 204 & 98 & -26 & -10 \\ -280 & -134 & 36 & 14 \\ 716 & 348 & -90 & -36 \\ -472 & -232 & 60 & 28 \end{bmatrix} \begin{bmatrix} -3 \\ 4 \\ -10 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} = 0 \begin{bmatrix} -3 \\ 4 \\ -10 \\ 4 \end{bmatrix} = 0y

so y is an eigenvector of A with eigenvalue λ = 0.

Also,

    Az = \begin{bmatrix} 204 & 98 & -26 & -10 \\ -280 & -134 & 36 & 14 \\ 716 & 348 & -90 & -36 \\ -472 & -232 & 60 & 28 \end{bmatrix} \begin{bmatrix} -3 \\ 7 \\ 0 \\ 8 \end{bmatrix} = \begin{bmatrix} -6 \\ 14 \\ 0 \\ 16 \end{bmatrix} = 2 \begin{bmatrix} -3 \\ 7 \\ 0 \\ 8 \end{bmatrix} = 2z

so z is an eigenvector of A with eigenvalue λ = 2.

Also,

    Aw = \begin{bmatrix} 204 & 98 & -26 & -10 \\ -280 & -134 & 36 & 14 \\ 716 & 348 & -90 & -36 \\ -472 & -232 & 60 & 28 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \\ 4 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ -2 \\ 8 \\ 0 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ -1 \\ 4 \\ 0 \end{bmatrix} = 2w

so w is an eigenvector of A with eigenvalue λ = 2.

So we have demonstrated four eigenvectors of A.
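These four relations are easy to check by machine as well as by hand. What follows is a minimal sketch, not part of the original text, assuming NumPy is available; it simply forms each product Av and compares it with λv:

    import numpy as np

    # The matrix A and the vectors x, y, z, w of Example SEE
    A = np.array([[ 204,   98, -26, -10],
                  [-280, -134,  36,  14],
                  [ 716,  348, -90, -36],
                  [-472, -232,  60,  28]])
    x = np.array([ 1, -1,   2, 5])
    y = np.array([-3,  4, -10, 4])
    z = np.array([-3,  7,   0, 8])
    w = np.array([ 1, -1,   4, 0])

    # Check that A v equals lambda v for each claimed eigenvalue
    for v, lam in [(x, 4), (y, 0), (z, 2), (w, 2)]:
        assert np.array_equal(A @ v, lam * v)
    print("Ax = 4x, Ay = 0y, Az = 2z, Aw = 2w all verified")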
Are there more eigenvectors? Yes, any nonzero scalar multiple of an eigenvector is again an eigenvector. In this example, set u = 30x. Then

    Au = A(30x)
       = 30Ax          Theorem MMSMM
       = 30(4x)        x an eigenvector of A
       = 4(30x)        Property SMAM
       = 4u

so that u is also an eigenvector of A for the same eigenvalue, λ = 4.

The vectors z and w are both eigenvectors of A for the same eigenvalue λ = 2, yet this is not as simple as the two vectors just being scalar multiples of each other (they are not). Look what happens when we add them together, to form v = z + w, and multiply by A,

    Av = A(z + w)
       = Az + Aw       Theorem MMDAA
       = 2z + 2w       z, w eigenvectors of A
       = 2(z + w)      Property DVAC
       = 2v

so that v is also an eigenvector of A for the eigenvalue λ = 2. So it would appear that the set of eigenvectors that are associated with a fixed eigenvalue is closed under the vector space operations of C^n. Hmmm.

The vector y is an eigenvector of A for the eigenvalue λ = 0, so we can use Theorem ZSSM to write Ay = 0y = 0. But this also means that y ∈ N(A). There would appear to be a connection here also.

Example SEE hints at a number of intriguing properties, and there are many more. We will explore the general properties of eigenvalues and eigenvectors in Section PEE, but in this section we will concern ourselves with the question of actually computing eigenvalues and eigenvectors. First we need a bit of background material on polynomials and matrices.

Subsection PM Polynomials and Matrices

A polynomial is a combination of powers, multiplication by scalar coefficients, and addition (with subtraction just being the inverse of addition). We never have occasion to divide when computing the value of a polynomial. So it is with matrices. We can add and subtract matrices, we can multiply matrices by scalars, and we can form powers of square matrices by repeated applications of matrix multiplication. We do not normally divide matrices (though sometimes we can multiply by an inverse). If a matrix is square, all the operations constituting a polynomial will preserve the size of the matrix. So it is natural to consider evaluating a polynomial with a matrix, effectively replacing the variable of the polynomial by a matrix. We will demonstrate with an example.

Example PM Polynomial of a matrix
Let

    p(x) = 14 + 19x - 3x^2 - 7x^3 + x^4        D = \begin{bmatrix} -1 & 3 & 2 \\ 1 & 0 & -2 \\ -3 & 1 & 1 \end{bmatrix}

and we will compute p(D). First, the necessary powers of D. Notice that D^0 is defined to be the multiplicative identity, I_3, as will be the case in general.

    D^0 = I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}

    D^1 = D = \begin{bmatrix} -1 & 3 & 2 \\ 1 & 0 & -2 \\ -3 & 1 & 1 \end{bmatrix}

    D^2 = DD^1 = \begin{bmatrix} -1 & 3 & 2 \\ 1 & 0 & -2 \\ -3 & 1 & 1 \end{bmatrix} \begin{bmatrix} -1 & 3 & 2 \\ 1 & 0 & -2 \\ -3 & 1 & 1 \end{bmatrix} = \begin{bmatrix} -2 & -1 & -6 \\ 5 & 1 & 0 \\ 1 & -8 & -7 \end{bmatrix}

    D^3 = DD^2 = \begin{bmatrix} -1 & 3 & 2 \\ 1 & 0 & -2 \\ -3 & 1 & 1 \end{bmatrix} \begin{bmatrix} -2 & -1 & -6 \\ 5 & 1 & 0 \\ 1 & -8 & -7 \end{bmatrix} = \begin{bmatrix} 19 & -12 & -8 \\ -4 & 15 & 8 \\ 12 & -4 & 11 \end{bmatrix}

    D^4 = DD^3 = \begin{bmatrix} -1 & 3 & 2 \\ 1 & 0 & -2 \\ -3 & 1 & 1 \end{bmatrix} \begin{bmatrix} 19 & -12 & -8 \\ -4 & 15 & 8 \\ 12 & -4 & 11 \end{bmatrix} = \begin{bmatrix} -7 & 49 & 54 \\ -5 & -4 & -30 \\ -49 & 47 & 43 \end{bmatrix}

Then

    p(D) = 14 + 19D - 3D^2 - 7D^3 + D^4
         = 14 \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + 19 \begin{bmatrix} -1 & 3 & 2 \\ 1 & 0 & -2 \\ -3 & 1 & 1 \end{bmatrix} - 3 \begin{bmatrix} -2 & -1 & -6 \\ 5 & 1 & 0 \\ 1 & -8 & -7 \end{bmatrix} - 7 \begin{bmatrix} 19 & -12 & -8 \\ -4 & 15 & 8 \\ 12 & -4 & 11 \end{bmatrix} + \begin{bmatrix} -7 & 49 & 54 \\ -5 & -4 & -30 \\ -49 & 47 & 43 \end{bmatrix}
         = \begin{bmatrix} -139 & 193 & 166 \\ 27 & -98 & -124 \\ -193 & 118 & 20 \end{bmatrix}

Notice that p(x) factors as

    p(x) = 14 + 19x - 3x^2 - 7x^3 + x^4 = (x - 2)(x - 7)(x + 1)^2

Because D commutes with itself (DD = DD), we can use distributivity of matrix multiplication across matrix addition (Theorem MMDAA) without being careful with any of the matrix products, and just as easily evaluate p(D) using the factored form of p(x),

    p(D) = 14 + 19D - 3D^2 - 7D^3 + D^4 = (D - 2I_3)(D - 7I_3)(D + I_3)^2
         = \begin{bmatrix} -3 & 3 & 2 \\ 1 & -2 & -2 \\ -3 & 1 & -1 \end{bmatrix} \begin{bmatrix} -8 & 3 & 2 \\ 1 & -7 & -2 \\ -3 & 1 & -6 \end{bmatrix} \begin{bmatrix} 0 & 3 & 2 \\ 1 & 1 & -2 \\ -3 & 1 & 2 \end{bmatrix}^2
         = \begin{bmatrix} -139 & 193 & 166 \\ 27 & -98 & -124 \\ -193 & 118 & 20 \end{bmatrix}

This example is not meant to be too profound. It is meant to show you that it is natural to evaluate a polynomial with a matrix, and that the factored form of the polynomial is as good as (or maybe better than) the expanded form. And do not forget that constant terms in polynomials are really multiples of the identity matrix when we are evaluating the polynomial with a matrix.
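If you would like to confirm the arithmetic in Example PM, the sketch below (again not part of the original text, and assuming NumPy) evaluates p(D) in both the expanded and the factored form and checks that the two agree:

    import numpy as np
    from numpy.linalg import matrix_power

    # The matrix D from Example PM
    D = np.array([[-1, 3,  2],
                  [ 1, 0, -2],
                  [-3, 1,  1]])
    I3 = np.eye(3, dtype=int)

    # Expanded form: p(D) = 14 I_3 + 19 D - 3 D^2 - 7 D^3 + D^4
    expanded = (14*I3 + 19*D - 3*matrix_power(D, 2)
                - 7*matrix_power(D, 3) + matrix_power(D, 4))

    # Factored form: p(D) = (D - 2 I_3)(D - 7 I_3)(D + I_3)^2
    factored = (D - 2*I3) @ (D - 7*I3) @ matrix_power(D + I3, 2)

    assert np.array_equal(expanded, factored)
    print(expanded)   # [[-139 193 166], [27 -98 -124], [-193 118 20]]

Note the constant term 14 entering the computation as 14*I3, just as the closing remark of the example says it must.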
Subsection EEE Existence of Eigenvalues and Eigenvectors

Before we embark on computing eigenvalues and eigenvectors, we will prove that every matrix has at least one eigenvalue (and an eigenvector to go with it). Later, in Theorem MNEM, we will determine the maximum number of eigenvalues a matrix may have.

The determinant (Definition DM) will be a powerful tool in Subsection EE.CEE when it comes time to compute eigenvalues. However, it is possible, with some more advanced machinery, to compute eigenvalues without ever making use of the determinant. Sheldon Axler does just that in his book, Linear Algebra Done Right. Here and now, we give Axler's "determinant-free" proof that every matrix has an eigenvalue.
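As a numerical preview of that existence claim, consider a real matrix with no real eigenvalue at all, such as a rotation of the plane by 90 degrees: no nonzero real vector keeps its direction under the rotation, yet once we allow complex scalars the promised eigenvalues appear. The sketch below is an illustration added here, not part of the original text, and assumes NumPy:

    import numpy as np

    # Rotation of R^2 by 90 degrees: no real eigenvalue exists,
    # since no nonzero real vector is sent to a multiple of itself.
    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])

    # Over C an eigenvalue is guaranteed; here we get +i and -i
    print(np.linalg.eigvals(R))   # [0.+1.j 0.-1.j]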