
Eigenvalues and eigenvectors

Definition: For a vector space V over a field K and a linear map f : V → V, an eigenvalue of f is any λ ∈ K for which there exists a vector u ∈ V \ 0 such that f(u) = λu. An eigenvector corresponding to an eigenvalue λ is any vector u such that f(u) = λu.

If V is of finite dimension n, then f can be represented by the matrix A = [f]_XX ∈ K^{n×n} w.r.t. some basis X of V. This way we get eigenvalues λ ∈ K and eigenvectors x ∈ K^n of matrices — these shall satisfy Ax = λx.
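For concreteness, the matrix form of the definition can be checked numerically. The sketch below (numpy assumed; the example matrix is my own, not from the slides) verifies Ax = λx for each eigenpair returned by the library:

```python
import numpy as np

# A small matrix whose eigenpairs are easy to see by hand:
# it is triangular, so the eigenvalues are the diagonal entries 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# numpy returns the eigenvalues in w and the eigenvectors as columns of V.
w, V = np.linalg.eig(A)

# Check the defining identity A x = lambda x for every pair.
for lam, x in zip(w, V.T):
    assert np.allclose(A @ x, lam * x)

print(sorted(w))  # the eigenvalues 2.0 and 3.0
```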

The collection of all eigenvalues of a matrix is called its spectrum.

Examples — linear maps in the plane R^2

The axis symmetry by the axis of the 2nd and 4th quadrant:

    [f]_KK = (  0  −1 )
             ( −1   0 )

λ1 = 1,  x1 = c · (−1, 1)^T
λ2 = −1, x2 = c · (1, 1)^T

The rotation by the right angle:

    [f]_KK = (  0  1 )
             ( −1  0 )

No real eigenvalues nor eigenvectors exist.

The projection onto the first coordinate:

    [f]_KK = ( 1  0 )
             ( 0  0 )

λ1 = 0, x1 = c · (0, 1)^T
λ2 = 1, x2 = c · (1, 0)^T

Scaling by the factor 2:

    [f]_KK = ( 2  0 )
             ( 0  2 )

λ1 = 2; every vector is an eigenvector.
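The eigenpairs claimed for the reflection above, and the absence of real eigenvalues for the rotation, can be verified numerically. A sketch (numpy assumed, not part of the original slides):

```python
import numpy as np

# Reflection across the axis of the 2nd and 4th quadrants.
F = np.array([[0.0, -1.0],
              [-1.0, 0.0]])

# Claimed eigenpairs: lambda1 = 1 with (-1, 1)^T, lambda2 = -1 with (1, 1)^T.
x1 = np.array([-1.0, 1.0])
x2 = np.array([1.0, 1.0])
assert np.allclose(F @ x1, 1.0 * x1)
assert np.allclose(F @ x2, -1.0 * x2)

# The rotation by a right angle has no real eigenvalues:
R = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
w, _ = np.linalg.eig(R)
print(w)  # a complex conjugate pair, +i and -i
```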

A linear map given by a matrix

    [f]_KK = ( 1  0 )
             ( 1  1 )

λ1 = 1, x1 = c · (0, 1)^T

Eigenvalues and eigenvectors of a diagonal matrix D

The equation

         ( d1,1   0   ...   0  ) ( x1 )   ( d1,1 x1 )
    Dx = (  0   d2,2  ...   0  ) ( x2 ) = ( d2,2 x2 ) = λx
         ( ...             ... ) ( .. )   (   ...   )
         (  0     0   ... dn,n ) ( xn )   ( dn,n xn )

is solved by the following eigenvalues and eigenvectors:

λ = d1,1 and x = e1 = (1, 0, 0, ..., 0)^T,
λ = d2,2 and x = e2 = (0, 1, 0, ..., 0)^T,
  ...
λ = dn,n and x = en = (0, 0, ..., 0, 1)^T.

Hence the eigenvalues of D are the elements on the diagonal, and the eigenvectors form the canonical basis of the space K^n.
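A quick numerical check of the diagonal case (numpy assumed; the diagonal entries are my own example values):

```python
import numpy as np

d = np.array([4.0, -2.0, 7.0])
D = np.diag(d)

w, V = np.linalg.eig(D)

# The eigenvalues are exactly the diagonal entries...
assert np.allclose(sorted(w), sorted(d))

# ...and each canonical basis vector e_i satisfies D e_i = d_i e_i.
for i in range(3):
    e = np.zeros(3)
    e[i] = 1.0
    assert np.allclose(D @ e, d[i] * e)
```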

Properties of eigenvalues and eigenvectors

Observation: The eigenvectors corresponding to the same eigenvalue form a subspace.

Proof: Consider an eigenvalue λ of a linear map f and the set U = {u ∈ V : f(u) = λu}. For any u, v ∈ U and a ∈ K we get:
- f(au) = a·f(u) = aλu = λ(au),
- f(u + v) = f(u) + f(v) = λu + λv = λ(u + v).
Hence U is closed under addition and scalar multiples, i.e. it is a subspace of V.

Definition: The geometric multiplicity of an eigenvalue is the dimension of the space of its eigenvectors.
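The geometric multiplicity can be computed as n minus the rank of A − λI, since the eigenspace is the kernel of that matrix. A sketch (numpy assumed; the example matrices are my own, though the shear matrix matches the earlier 2×2 example):

```python
import numpy as np

def geometric_multiplicity(A, lam):
    """Dimension of the eigenspace of lam, i.e. dim ker(A - lam*I)."""
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

# The 2x2 identity: eigenvalue 1 with a 2-dimensional eigenspace.
print(geometric_multiplicity(np.eye(2), 1.0))  # 2

# The shear [[1,0],[1,1]]: eigenvalue 1, but only a 1-dimensional eigenspace.
S = np.array([[1.0, 0.0],
              [1.0, 1.0]])
print(geometric_multiplicity(S, 1.0))          # 1
```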

Theorem: Let f : V → V be a linear map, λ1, ..., λk distinct eigenvalues of f, and u1, ..., uk the corresponding nontrivial eigenvectors. Then u1, ..., uk are linearly independent.

Proof: Assume for a contradiction that k is the smallest number for which there exist λ1, ..., λk and u1, ..., uk contradicting the claim, i.e. there are a1, ..., ak ∈ K \ 0 such that Σ_{i=1}^{k} ai·ui = 0.

We express 0 in two ways:

    0 = f(0) = f( Σ_{i=1}^{k} ai·ui ) = Σ_{i=1}^{k} λi·ai·ui,

and also:

    0 = λk·0 = λk · Σ_{i=1}^{k} ai·ui = Σ_{i=1}^{k} λk·ai·ui,

hence:

    0 = 0 − 0 = Σ_{i=1}^{k} λi·ai·ui − Σ_{i=1}^{k} λk·ai·ui = Σ_{i=1}^{k−1} (λi − λk)·ai·ui.

As λi ≠ λk for i < k, we get (λi − λk)·ai ≠ 0. So already u1, ..., u_{k−1} are linearly dependent — a contradiction with the minimality of k.
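The theorem can be illustrated numerically: stacking eigenvectors for distinct eigenvalues as columns gives a full-rank matrix. A sketch (numpy assumed; any matrix with distinct eigenvalues would do — the one below is my own):

```python
import numpy as np

# A triangular matrix with three distinct eigenvalues: 2, 3 and 5.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

w, V = np.linalg.eig(A)                 # columns of V are eigenvectors
assert len(set(np.round(w, 8))) == 3    # the eigenvalues are distinct

# Linear independence <=> the matrix of eigenvectors has full rank.
assert np.linalg.matrix_rank(V) == 3
```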


Corollary: A matrix of order n may have at most n distinct eigenvalues.

Characteristic polynomial

Definition: The characteristic polynomial of a matrix A ∈ K^{n×n} is

    pA(t) = det(A − t·In).

Theorem: A number λ ∈ K is an eigenvalue of a matrix A ∈ K^{n×n} if and only if λ is a root of its characteristic polynomial pA(t).

Proof: λ is an eigenvalue of A
⇔ ∃ x ∈ K^n \ 0 : Ax = λx
⇔ ∃ x ∈ K^n \ 0 : (A − λIn)x = 0
⇔ the matrix A − λIn is singular
⇔ det(A − λIn) = pA(λ) = 0.

Eigenvalues — roots of the characteristic polynomial
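This equivalence is easy to observe numerically: numpy can compute the coefficients of the characteristic polynomial (numpy's convention is the monic polynomial det(t·I − A), which equals (−1)^n·pA(t)), and its roots coincide with the eigenvalues. A sketch (numpy assumed, example matrix my own):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of det(t*I - A) = t^2 - 4t + 3, highest power first.
coeffs = np.poly(A)
print(coeffs)  # [ 1. -4.  3.]

# Its roots are exactly the eigenvalues of A.
roots = np.roots(coeffs)
w = np.linalg.eig(A)[0]
assert np.allclose(sorted(roots), sorted(w))  # both give 1 and 3
```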

Zero matrix:

         ( 0 0 ... 0 )             | −t   0  ...  0  |
    0n = ( 0 0 ... 0 )   p0n(t) =  |  0  −t  ...  0  | = (−t)^n
         (    ...    )             |       ...       |
         ( 0 0 ... 0 )             |  0   0  ... −t  |

The matrix 0n has only a single eigenvalue, 0, with multiplicity n.

A diagonal or a triangular matrix (also the identity matrix In):

        ( a1,1   ∗   ...   ∗   )            | a1,1 − t     ∗     ...     ∗     |
    A = (  0   a2,2  ...   ∗   )   pA(t) =  |    0      a2,2 − t ...     ∗     | = Π_{i=1}^{n} (ai,i − t)
        (           ...        )            |                ...               |
        (  0     0   ... an,n  )            |    0         0     ... an,n − t  |

The eigenvalues of A are a1,1, a2,2, ..., an,n.

The matrix with ones:

         ( 1 1 ... 1 )             | 1−t   1   ...   1  |
    1n = ( 1 1 ... 1 )   p1n(t) =  |  1   1−t  ...   1  |
         (    ...    )             |        ...         |
         ( 1 1 ... 1 )             |  1    1   ... 1−t  |

Subtracting the last row from each of the first n − 1 rows leaves those rows with −t on the diagonal and t in the last column; adding every column to the last column then yields a lower triangular matrix with diagonal (−t, ..., −t, n − t):

    | −t   0  ...  0   t |   | −t   0  ...  0    0  |
    |  0  −t  ...  0   t |   |  0  −t  ...  0    0  |
    |       ...          | = |        ...           | = (−t)^{n−1} (n − t)
    |  0   0  ... −t   t |   |  0   0  ... −t    0  |
    |  1   1  ...  1 1−t |   |  1   1  ...  1  n−t  |

The matrix 1n has the eigenvalue 0 of multiplicity n − 1 and the eigenvalue n of multiplicity 1.

Computing eigenvalues and eigenvectors

Determine the eigenvalues and eigenvectors of the matrix

        ( 1   2  0 )
    A = ( 3  −1  3 )
        ( 1  −2  2 )

Characteristic polynomial:

              | 1−t    2     0   |
    pA(t) =   |  3   −1−t    3   | = (1−t)(−1−t)(2−t) + 6 + 6(1−t) − 6(2−t) = −t^3 + 2t^2 + t − 2
              |  1    −2    2−t  |

The eigenvalues of A are the roots of pA(t), i.e. 2, 1 and −1.
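These roots can be double-checked numerically (numpy assumed):

```python
import numpy as np

A = np.array([[1.0,  2.0, 0.0],
              [3.0, -1.0, 3.0],
              [1.0, -2.0, 2.0]])

# p_A(t) = -t^3 + 2t^2 + t - 2; numpy.roots takes the coefficients
# from the highest power down.
roots = np.roots([-1.0, 2.0, 1.0, -2.0])
print(sorted(roots))  # approximately [-1.0, 1.0, 2.0]

# The same values come out directly as the eigenvalues of A.
w = np.linalg.eig(A)[0]
assert np.allclose(sorted(w), sorted(roots))
```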

The eigenvector x1 for λ1 = 2 is any solution of (A − λ1·I3)x1 = 0:

                ( 1−2    2    0  )   ( −1   2  0 )   ( 1  −2  0 )
    A − λ1·I3 = (  3   −1−2   3  ) = (  3  −3  3 ) ∼ ( 0   1  1 )
                (  1   −2   2−2  )   (  1  −2  0 )

The solution x1 is any scalar multiple of the vector (2, 1, −1)^T.
The eigenvalue λ2 = 1 yields the eigenvector x2 = (−1, 0, 1)^T, and the eigenvalue λ3 = −1 yields the eigenvector x3 = (−1, 1, 1)^T.

Coefficients of the characteristic polynomial

Observation: For pA(t) = det(A − t·In) = Σ_{i=0}^{n} bi·t^i it holds:
- bn = (−1)^n ... only the product along the diagonal of A − t·In may yield t^n, and each of its factors of t has coefficient −1.

Corollary: When K is algebraically closed, one may factorize the characteristic polynomial into linear factors with roots/eigenvalues:

    pA(t) = (λ1 − t)^{r1} (λ2 − t)^{r2} ... (λk − t)^{rk}   with r1 + ··· + rk = n.

The exponent ri is the algebraic multiplicity of the eigenvalue λi.

- b0 = det(A) = Π_{i=1}^{k} λi^{ri} ... substitute t = 0 into pA(t).
- b_{n−1} = (−1)^{n−1} Σ_{i=1}^{n} ai,i = (−1)^{n−1} Σ_{i=1}^{k} ri·λi ... the term t^{n−1} can be obtained only from the product of the linear terms ai,i − t on the diagonal of A − t·In, by choosing the term −t exactly n − 1 times and once each of the ai,i. Analogously for the product of the terms λi − t.

Application — eigenvalues of rabbit populations

Assume that the rabbits' breeding is described by a simple law, e.g. that this year's number of rabbits is the sum of the numbers from the past two years.

Let Ft denote the number of rabbits in year t. We get the recurrence for the Fibonacci numbers Ft = Ft−1 + Ft−2. We may ask how the ratio Ft / Ft−1 develops — whether it has a limit, whether it oscillates, or whether it becomes stable.

The same in the language of matrices and vector spaces: consider the space R^2 and the linear map f : R^2 → R^2 given by

    ( Ft   )   ( 1  1 ) ( Ft−1 )
    ( Ft−1 ) = ( 1  0 ) ( Ft−2 )

which describes the same recurrence. E.g., if we start with a single rabbit, we get the sequence

    (1)    (1)    (2)    (3)    (5)    (8)    (13)
    (0) →  (1) →  (1) →  (2) →  (3) →  (5) →  ( 8) → ...

A stable ratio Ft / Ft−1 corresponds to a vector x = (Ft, Ft−1)^T satisfying

    f(x) = ( 1  1 ) x = λx
           ( 1  0 )

for some λ ∈ R. (The vectors x and λx have the same ratio of coordinates.)

The matrix ( 1 1 ; 1 0 ) has two eigenvalues, namely:

λ1 = (1 + √5)/2,  x1 = ( (1 + √5)/2, 1 )^T ... the vector's elements grow with every iteration,

λ2 = (1 − √5)/2,  x2 = ( (1 − √5)/2, 1 )^T ... here the sign changes with every iteration, and the limit is the zero vector.
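The behaviour described above — the ratio Ft / Ft−1 converging to the dominant eigenvalue (1 + √5)/2 — can be observed by iterating the matrix. A sketch (numpy assumed):

```python
import numpy as np

M = np.array([[1, 1],
              [1, 0]], dtype=float)

v = np.array([1.0, 0.0])  # start with a single rabbit: (F_1, F_0)
for _ in range(30):
    v = M @ v             # one year of breeding

ratio = v[0] / v[1]
golden = (1 + np.sqrt(5)) / 2
print(ratio, golden)      # the ratio approaches the golden ratio
assert abs(ratio - golden) < 1e-10
```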