Chapter 3: Matrices

3.1 Matrices

Definition 3.1 (Matrix) A matrix A is a rectangular array of m × n real numbers {a_ij}, written as

    A = [ a_11  a_12  ···  a_1n ]
        [ a_21  a_22  ···  a_2n ]
        [  ⋮     ⋮           ⋮  ]
        [ a_m1  a_m2  ···  a_mn ]

The array has m rows and n columns. If A and B are both m × n matrices, then C = A + B is the m × n matrix of real numbers with c_ij = a_ij + b_ij. If A is an m × n matrix and α ∈ ℜ, then αA is the m × n matrix with elements αa_ij. Also, αA = Aα, and α(A + B) = αA + αB.

A matrix A can be used to define a function A : ℜ^n → ℜ^m.

Definition 3.2 (Linear function associated with a matrix) Given a vector x ∈ ℜ^n with coordinates (α_1, ..., α_n) relative to a fixed basis, define u = Ax to be the vector in ℜ^m with coordinates

    β_i = Σ_{j=1}^n a_ij α_j = (A_i, x),   i = 1, ..., m        (3.1)

where A_i is the vector consisting of the i-th row of A. The vector u has coordinates (β_1, ..., β_m).

Theorem 3.1 For all x, y ∈ ℜ^n, scalars α and β, and any two m × n matrices A, B, the function (3.1) has the following two properties:

    A(αx + βy) = αAx + βAy        (3.2)
    (αA + βB)x = αAx + βBx

In light of (3.2) we will call A a linear function, and if m = n, A is a linear transformation.

Proof: Straightforward application of the definition.

Definition 3.3 A square matrix has the same number of rows and columns. It transforms ℜ^n → ℜ^n, but is not necessarily onto.

Connection between matrices and linear transformations. In light of (3.1), every square matrix corresponds to a linear transformation from ℜ^n → ℜ^n. This justifies using the same symbol both for a linear transformation and for its corresponding matrix. The matrix representation of a linear transformation depends on the basis.

Example. In ℜ^3, consider the linear transformation defined as follows.
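The linear function of Definition 3.2 and the linearity property (3.2) can be sketched numerically in a few lines of Python; the matrix and vectors below are made up for illustration and are not part of the text.

```python
def matvec(A, x):
    # beta_i = (A_i, x): inner product of the i-th row of A with x, as in (3.1)
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

# a hypothetical 3 x 3 example matrix and two vectors
A = [[1, 0, 1],
     [2, 1, 0],
     [3, 1, -1]]
x = [1, 2, 3]
y = [0, -1, 1]

u = matvec(A, x)
print(u)  # [4, 4, 2]

# linearity (Theorem 3.1): A(2x + 3y) = 2Ax + 3Ay
lhs = matvec(A, [2 * xi + 3 * yi for xi, yi in zip(x, y)])
rhs = [2 * ui + 3 * vi for ui, vi in zip(matvec(A, x), matvec(A, y))]
assert lhs == rhs
```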
For the canonical basis {e_1, e_2, e_3},

    Ae_1 = (1, 2, 3)′    Ae_2 = (0, 1, 1)′    Ae_3 = (1, 0, −1)′

Relative to this basis, the matrix of A is

    A = [ 1  0   1 ]
        [ 2  1   0 ]
        [ 3  1  −1 ]

If we change to a different basis, say

    x_1 = (1, 1, 1)′,   x_2 = (1, −1, 0)′,   x_3 = (1, 1, −2)′,

then the matrix of A changes. For this example, since x_1 = e_1 + e_2 + e_3, x_2 = e_1 − e_2, and x_3 = e_1 + e_2 − 2e_3, we can compute

    Ax_1 = A(e_1 + e_2 + e_3) = Ae_1 + Ae_2 + Ae_3 = (2, 3, 3)′ = a_1

and this is the first column of A relative to this basis. The full matrix is

    [ 2  1  −1 ]
    [ 3  1   3 ]
    [ 3  2   6 ]

While the matrix representation of a linear transformation depends on the basis chosen, the two special linear transformations 0 and I are always identified with the null matrix and the identity matrix, respectively.

Definition 3.4 (Matrix Multiplication) Given two matrices A : m × n and B : n × p, the matrix product C = AB (in this order) is an m × p matrix with typical element

    c_ij = Σ_{k=1}^n a_ik b_kj

The matrix product is defined only when the number of columns of A equals the number of rows of B. The matrix product is connected to (3.1) by the following theorem.

Theorem 3.2 Suppose A : m × n and B : n × p, and C = AB. Then C defines a linear function with domain ℜ^p and range ℜ^m. For any x ∈ ℜ^p, we have

    Cx = (AB)x = A(Bx)

This theorem shows that matrix multiplication is the same as function composition, so Cx is the same as applying the function A to the vector Bx.

Definition 3.5 (Transpose of a matrix) The transpose A′ of an n × m matrix A = (a_ij) is the m × n array whose entries are given by (a_ji).

Definition 3.6 (Rank of a matrix) An m × n matrix A is a transformation from ℜ^n → ℜ^m. The rank ρ(A) of A is the dimension of the vector subspace {u | u = Ax, x ∈ ℜ^n} ⊂ ℜ^m.

Theorem 3.3 The rank of A is equal to the number of linearly independent columns of A.

Theorem 3.4 ρ(A) = ρ(A′); that is, the number of linearly independent columns of A is the same as the number of linearly independent rows.

Definition 3.7 The n × n matrix A is nonsingular if ρ(A) = n.
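Theorem 3.2 says the product matrix computes the composition of the two linear functions. This can be checked directly in a short sketch (the matrices below are illustrative, not from the text): (AB)x = A(Bx).

```python
def matmul(A, B):
    # c_ij = sum_k a_ik b_kj, defined when cols(A) == rows(B) (Definition 3.4)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

A = [[1, 0, 1],     # 3 x 3
     [2, 1, 0],
     [3, 1, -1]]
B = [[1, 1],        # 3 x 2, so AB is 3 x 2 and maps R^2 -> R^3
     [0, -1],
     [2, 1]]
x = [1, 2]

composed = matvec(A, matvec(B, x))   # A(Bx): apply B, then A
product = matvec(matmul(A, B), x)    # (AB)x: apply the product matrix
assert product == composed
print(product)  # [7, 4, 3]
```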
If ρ(A) < n, then A is singular.

Definition 3.8 (Inverse) The inverse of a square matrix A is a square matrix of the same size, written A⁻¹, such that AA⁻¹ = A⁻¹A = I.

Theorem 3.5 A⁻¹ exists and is unique if and only if A is nonsingular.

Definition 3.9 (Trace) trace(A) = tr(A) = Σ_i a_ii.

From this definition, it is easy to show the following:

    tr(A + B) = tr(A) + tr(B)          (trace is additive)
    tr(ABC) = tr(BCA) = tr(CAB)        (trace is cyclic)

and, if B is nonsingular,

    tr(A) = tr(ABB⁻¹) = tr(BAB⁻¹) = tr(B⁻¹AB)

Definition 3.10 (Symmetric) A is symmetric if a_ij = a_ji for all i, j.

Definition 3.11 (Diagonal) A is diagonal if a_ij = 0 for all i ≠ j.

Definition 3.12 (Determinant) The determinant of an n × n matrix A is given by

    det(A) = Σ (−1)^{f(i_1, ..., i_n)} a_{1 i_1} a_{2 i_2} ··· a_{n i_n}
           = Σ (−1)^{f(i_1, ..., i_n)} a_{i_1 1} a_{i_2 2} ··· a_{i_n n}

where each sum is over all permutations (i_1, ..., i_n) of (1, ..., n), and f(i_1, ..., i_n) is the number of transpositions needed to change (1, 2, ..., n) into (i_1, ..., i_n). The determinant of a diagonal matrix is the product of its diagonal elements. The determinant of an n × n matrix is a polynomial of degree n.

Theorem 3.6 If A is singular, then det(A) = 0.

3.2 Eigenvectors and Eigenvalues

Definition 3.13 (Eigenvectors and eigenvalues) An eigenvector of a square matrix A is any nonzero vector x such that Ax = λx for some λ ∈ ℜ; λ is an eigenvalue of A.

From the definition, if x is an eigenvector of A, then Ax = λx = λIx and

    Ax − λIx = 0        (3.3)

Theorem 3.7 λ is an eigenvalue of A if and only if (3.3) is satisfied for some nonzero x. Equivalently, λ is an eigenvalue if it is a solution to

    det(A − λI) = 0

This last equation provides a prescription for finding eigenvalues, as solutions to det(A − λI) = 0. The determinant of an n × n matrix is a polynomial of degree n, showing that the number of eigenvalues, counted with multiplicity, must be n. The eigenvalues need not be unique or nonzero. In addition, for any nonzero scalar c, A(cx) = cAx = c(λx) = λ(cx), so cx is also an eigenvector with the same eigenvalue λ.
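The additive and cyclic trace identities above are easy to verify numerically; the 2 × 2 matrices in this sketch are arbitrary illustrations, not examples from the text.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def tr(A):
    # trace: sum of the diagonal elements (Definition 3.9)
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
C = [[2, 0], [1, -1]]

# additivity: tr(A + B) = tr(A) + tr(B)
AplusB = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
assert tr(AplusB) == tr(A) + tr(B)

# cyclicity: tr(ABC) = tr(BCA) = tr(CAB)
tABC = tr(matmul(matmul(A, B), C))
assert tABC == tr(matmul(matmul(B, C), A))
assert tABC == tr(matmul(matmul(C, A), B))
# note the trace is cyclic, not fully commutative: tr(ACB) may differ
```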
To resolve this particular source of indeterminacy, we will always require that eigenvectors be normalized to have unit length, ‖x‖ = 1. However, some indeterminacy in the eigenvectors still remains, as shown in the next theorem.

Theorem 3.8 If x_1 and x_2 are eigenvectors with the same eigenvalue, then any nonzero linear combination of x_1 and x_2 is also an eigenvector with the same eigenvalue.

Proof. If Ax_i = λx_i for i = 1, 2, then

    A(α_1 x_1 + α_2 x_2) = α_1 Ax_1 + α_2 Ax_2 = α_1 λx_1 + α_2 λx_2 = λ(α_1 x_1 + α_2 x_2)

as required.

According to Theorem 3.8, the set of eigenvectors corresponding to the same eigenvalue (together with the zero vector) forms a vector subspace. The dimension of this subspace can be as large as the multiplicity of the eigenvalue λ, but it can also be less. For example, the matrix

    A = [ 1  2  3 ]
        [ 0  1  0 ]
        [ 0  2  1 ]

has det(A − λI) = (1 − λ)³, so it has one eigenvalue equal to one with multiplicity three. The equation Ax = 1x has only solutions of the form x = (a, 0, 0)′ for any a, and these form a vector subspace of dimension one, less than the multiplicity of λ. We can always find an orthonormal basis for this subspace, so the eigenvectors corresponding to a fixed eigenvalue can be taken to be orthogonal.

Eigenvalues can be real or complex, but if A is symmetric, then the eigenvalues must be real.

Theorem 3.9 The eigenvalues of a real, symmetric matrix are real.

Proof. In the proof, we allow both the eigenvalue and the eigenvector to be complex. Suppose we have eigenvector x + iy with eigenvalue λ_1 + iλ_2, so A(x + iy) = (λ_1 + iλ_2)(x + iy), and thus

    Ax + iAy = (λ_1 x − λ_2 y) + i(λ_1 y + λ_2 x)

Equating real and imaginary parts, we get

    Ax = λ_1 x − λ_2 y        (3.4)
    Ay = λ_1 y + λ_2 x        (3.5)

Multiply (3.4) on the left by y′ and (3.5) on the left by x′.
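The 3 × 3 example above can be checked directly. The sketch below evaluates det(A − λI) by cofactor expansion at several values of λ and confirms that every vector of the form (a, 0, 0)′ is fixed by A; it is an illustration, not part of the text.

```python
def det3(M):
    # determinant of a 3 x 3 matrix by cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3],
     [0, 1, 0],
     [0, 2, 1]]

# det(A - lam*I) equals (1 - lam)^3 at every lam tried
for lam in [-2, 0, 0.5, 2, 3]:
    AmLI = [[A[i][j] - (lam if i == j else 0) for j in range(3)]
            for i in range(3)]
    assert abs(det3(AmLI) - (1 - lam) ** 3) < 1e-12

# every x = (a, 0, 0)' satisfies Ax = 1x, so the eigenspace for lam = 1
# is one-dimensional even though the multiplicity of lam = 1 is three
for a in [1, -3, 2.5]:
    x = [a, 0, 0]
    Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    assert Ax == x
```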
Since x′Ay is a scalar and A is symmetric, x′Ay = (x′Ay)′ = y′A′x = y′Ax, so we can equate the right-hand sides of these two modified equations to get

    λ_1 y′x − λ_2 y′y = λ_1 x′y + λ_2 x′x

and, since y′x = x′y,

    λ_2 (x′x + y′y) = 0

Because the eigenvector x + iy is nonzero, x′x + y′y > 0, so this last equation holds only if λ_2 = 0. Hence the eigenvalue must be real, from which it follows that the eigenvector can be taken to be real as well.
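Theorem 3.9 can be made concrete in the 2 × 2 symmetric case [[a, b], [b, d]]: the characteristic equation λ² − (a + d)λ + (ad − b²) = 0 has discriminant (a − d)² + 4b², which is never negative, so both roots are real. A sketch with an illustrative matrix (not from the text):

```python
import math

def sym_eigvals_2x2(a, b, d):
    # eigenvalues of the symmetric matrix [[a, b], [b, d]] by the quadratic
    # formula; the discriminant (a - d)^2 + 4 b^2 is >= 0 for real a, b, d,
    # so math.sqrt never sees a negative argument and both roots are real
    disc = (a - d) ** 2 + 4 * b ** 2
    s = math.sqrt(disc)
    return (a + d - s) / 2, (a + d + s) / 2

lo, hi = sym_eigvals_2x2(2, 1, 2)
print(lo, hi)  # 1.0 3.0

# each eigenvalue satisfies det(A - lam*I) = (a - lam)(d - lam) - b^2 = 0
for lam in (lo, hi):
    assert abs((2 - lam) * (2 - lam) - 1) < 1e-12
```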
