Lecture 13: Complex Eigenvalues & Factorization

Consider the matrix
$$A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}. \qquad (1)$$
To find eigenvalues, we write
$$A - \lambda I = \begin{bmatrix} -\lambda & 1 \\ -1 & -\lambda \end{bmatrix},$$
and calculate its determinant:
$$\det(A - \lambda I) = \lambda^2 + 1 = 0.$$
We see that $A$ has only complex eigenvalues
$$\lambda = \pm\sqrt{-1} = \pm i.$$
Therefore, it is impossible to diagonalize the matrix $A$ using real matrices. In general, if a matrix has complex eigenvalues, it is not diagonalizable over the real numbers. In this lecture, we shall study matrices with complex eigenvalues. Since eigenvalues are roots of a characteristic polynomial with real coefficients, complex eigenvalues always appear in conjugate pairs: if
$$\lambda_0 = a + bi$$
is a complex eigenvalue, so is its conjugate
$$\bar\lambda_0 = a - bi.$$
For any complex eigenvalue, we can proceed to find its (complex) eigenvectors in the same way as we did for real eigenvalues.

Example 13.1. For the matrix $A$ in (1) above, find eigenvectors.

Solution. We already know its eigenvalues: $\lambda_0 = i$, $\bar\lambda_0 = -i$. For $\lambda_0 = i$, we solve $(A - iI)\vec{x} = \vec{0}$ by performing (complex) row operations (noting that $i^2 = -1$):

$$A - iI = \begin{bmatrix} -i & 1 \\ -1 & -i \end{bmatrix} \xrightarrow{iR_1 + R_2 \to R_2} \begin{bmatrix} -i & 1 \\ 0 & 0 \end{bmatrix} \implies -ix_1 + x_2 = 0 \implies x_2 = ix_1.$$
Taking $x_1 = 1$,
$$\vec{u} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 1 \\ i \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + i\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$
is an eigenvector for $\lambda = i$. For $\lambda = -i$, we proceed similarly as follows:

$$A + iI = \begin{bmatrix} i & 1 \\ -1 & i \end{bmatrix} \xrightarrow{-iR_1 + R_2 \to R_2} \begin{bmatrix} i & 1 \\ 0 & 0 \end{bmatrix} \implies ix_1 + x_2 = 0 \implies x_2 = -ix_1.$$
Taking $x_1 = 1$,
$$\vec{v} = \begin{bmatrix} 1 \\ -i \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + i\begin{bmatrix} 0 \\ -1 \end{bmatrix}$$
is an eigenvector for $\lambda = -i$. We note that
$$\bar\lambda_0 = -i \text{ is the conjugate of } \lambda_0 = i,$$
and $\vec{v} = \bar{\vec{u}}$ (the conjugate of the eigenvector $\vec{u}$) is an eigenvector for $\bar\lambda_0$.
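These computations can be checked numerically. Below is a minimal sketch in plain Python (not part of the lecture), using the built-in complex type and an ad-hoc matrix-vector helper:

```python
# A quick numerical check of Example 13.1 (plain Python, built-in complex
# type): eigenpairs of the real matrix A come in conjugate pairs.
A = [[0, 1],
     [-1, 0]]

def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

lam = 1j                 # eigenvalue lambda_0 = i
u = [1, 1j]              # eigenvector found above: (1, i)

# A u = lambda u
assert mat_vec(A, u) == [lam * x for x in u]

# taking conjugates gives the eigenpair for the conjugate eigenvalue -i
u_bar = [x.conjugate() if isinstance(x, complex) else x for x in u]
assert mat_vec(A, u_bar) == [lam.conjugate() * x for x in u_bar]
```

The second assertion is exactly the conjugation rule stated below: the conjugate of an eigenvector for $\lambda$ is an eigenvector for $\bar\lambda$.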

This observation can be justified as a general rule as follows: taking conjugates in the equation $(A - \lambda_0 I)\vec{u} = \vec{0}$,
$$(A - \lambda_0 I)\vec{u} = \vec{0} \implies \overline{(A - \lambda_0 I)\vec{u}} = \vec{0} \implies \bar{A}\bar{\vec{u}} - \bar\lambda_0\bar{\vec{u}} = \vec{0} \implies (A - \bar\lambda_0 I)\bar{\vec{u}} = \vec{0},$$
where we used $\bar{A} = A$ since $A$ is real.

Remark. In general, if $\vec{u}$ is an eigenvector associated with $\lambda$, then $\bar{\vec{u}}$, the conjugate of $\vec{u}$, is an eigenvector associated with $\bar\lambda$, the conjugate of $\lambda$. Therefore, for each pair $(\lambda, \bar\lambda)$ of eigenvalues, we only need to compute eigenvectors for $\lambda$; the eigenvectors for $\bar\lambda$ can be obtained easily by taking conjugates.

Though $A$ is not diagonalizable in the classic sense, we can still simplify it by introducing a so-called "block-diagonal" matrix.

Example 13.2. For the matrix $A$ in (1) above, which has complex eigenvalues, we proceed to choose $P$ and $D$ as follows. Pick one complex eigenvalue and its eigenvector:

$$\lambda_0 = -i, \qquad \vec{u} = \begin{bmatrix} 1 \\ -i \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} + i\begin{bmatrix} 0 \\ -1 \end{bmatrix},$$
and separate the real part and imaginary part ($\operatorname{Re}(a + bi) = a$, $\operatorname{Im}(a + bi) = b$):
$$\operatorname{Re}(\lambda_0) = 0, \quad \operatorname{Im}(\lambda_0) = -1, \quad \operatorname{Re}(\vec{u}) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad \operatorname{Im}(\vec{u}) = \begin{bmatrix} 0 \\ -1 \end{bmatrix}.$$
We then assemble $D$ and $P$ as
$$D = \begin{bmatrix} \operatorname{Re}(\lambda_0) & \operatorname{Im}(\lambda_0) \\ -\operatorname{Im}(\lambda_0) & \operatorname{Re}(\lambda_0) \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix},$$
$$P = [\operatorname{Re}(\vec{u}), \operatorname{Im}(\vec{u})] = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.$$
Note that $D$ is not really a diagonal matrix. We can verify $AP = PD$:
$$AP = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix},$$
$$PD = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}.$$
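The same check can be scripted. A plain-Python sketch (the `mat_mul` helper is ad hoc, not from the lecture), taking the eigenpair $\lambda_0 = -i$, $\vec{u} = (1, -i)$ from Example 13.1:

```python
# Sketch of Example 13.2: build P = [Re(u), Im(u)] and D from
# Re(lambda_0), Im(lambda_0), then check A P = P D entry by entry.
A = [[0, 1],
     [-1, 0]]

def mat_mul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

lam = -1j                # chosen eigenvalue lambda_0 = -i
u = [1 + 0j, -1j]        # its eigenvector (1, -i)

# columns of P are Re(u) and Im(u); D is the 2x2 rotation-like block
P = [[u[0].real, u[0].imag],
     [u[1].real, u[1].imag]]
D = [[lam.real, lam.imag],
     [-lam.imag, lam.real]]

assert mat_mul(A, P) == mat_mul(P, D)
```

All entries here are small integers, so the exact equality test is safe; with general data a tolerance comparison would be preferable.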

In general, if a matrix $A$ has complex eigenvalues, it may be similar to a block-diagonal matrix $B$; i.e., there exists an invertible matrix $P$ such that $AP = PB$, where $B$ has the form
$$B = \begin{bmatrix} C_1 & 0 & \cdots & 0 \\ 0 & C_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & C_r \end{bmatrix}, \qquad P = [P_1, P_2, \ldots, P_r]. \qquad (2)$$
Each $C_k$ is either a real eigenvalue $\lambda_k$ (a $1 \times 1$ matrix), or a $2 \times 2$ matrix (called a block) corresponding to a complex eigenvalue $\lambda_k = \operatorname{Re}(\lambda_k) + i\operatorname{Im}(\lambda_k)$, in the form
$$C_k = \begin{bmatrix} \operatorname{Re}(\lambda_k) & \operatorname{Im}(\lambda_k) \\ -\operatorname{Im}(\lambda_k) & \operatorname{Re}(\lambda_k) \end{bmatrix}, \qquad (3)$$
and $P$ is assembled accordingly: if $C_k$ is a real eigenvalue, the corresponding column $P_k$ in $P$ is a (real) eigenvector; if $C_k$ is a $2 \times 2$ block as in (3) associated with a complex eigenvalue $\lambda_k$, then $P_k$ consists of two columns, $P_k = [\operatorname{Re}(\vec{u}_k), \operatorname{Im}(\vec{u}_k)]$, where $\vec{u}_k$ is a complex eigenvector for $\lambda_k$ (note that both $\operatorname{Re}(\vec{u}_k)$ and $\operatorname{Im}(\vec{u}_k)$ are real vectors).

Example 13.3. Some examples of block-diagonal matrices of the form (2), with eigenvalues listed.

1. A $3 \times 3$ matrix with one diagonal entry and one $2 \times 2$ block:
$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 3 \\ 0 & -3 & 2 \end{bmatrix}, \qquad (\lambda_1 = 2,\ \lambda_2 = 2 + 3i,\ \lambda_3 = 2 - 3i).$$

2. A $4 \times 4$ matrix with two diagonal entries and one $2 \times 2$ block:
$$\begin{bmatrix} -1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & -4 & 2 \\ 0 & 0 & -2 & -4 \end{bmatrix}, \qquad (\lambda_1 = -1,\ \lambda_2 = 2,\ \lambda_3 = -4 + 2i,\ \lambda_4 = -4 - 2i).$$

3. A $4 \times 4$ matrix with two $2 \times 2$ blocks:
$$\begin{bmatrix} -4 & 2 & 0 & 0 \\ -2 & -4 & 0 & 0 \\ 0 & 0 & 2 & 3 \\ 0 & 0 & -3 & 2 \end{bmatrix}, \qquad (\lambda_1 = -4 + 2i,\ \lambda_2 = -4 - 2i,\ \lambda_3 = 2 + 3i,\ \lambda_4 = 2 - 3i).$$

The process of finding a block-diagonal matrix $B$ in the form (2) and an invertible matrix $P$ such that $P^{-1}AP = B$ is called factorization. Note that if all eigenvalues are real, factorization = diagonalization.

Factorization process:

Step #1. Find all eigenvalues.

Step #2. For each real eigenvalue $\lambda$, find a basis of the eigenspace $\operatorname{Null}(A - \lambda I)$. For each pair of complex eigenvalues $\lambda$ and $\bar\lambda$, find a (complex) basis of $\operatorname{Null}(A - \lambda I)$, and write the complex eigenvectors in the basis in the form $\vec{u} = \operatorname{Re}(\vec{u}) + i\operatorname{Im}(\vec{u})$. Note that the total number of columns produced this way must equal the dimension; otherwise, the matrix is not factorizable.

Step #3. Assemble $P$ and $B$ as in (2): if $C_1$ is a real eigenvalue, the corresponding column $P_1$ is one of its eigenvectors in the basis; if $C_1$ is a $2 \times 2$ block matrix as in (3), the corresponding $P_1$ consists of the two columns $[\operatorname{Re}(\vec{u}), \operatorname{Im}(\vec{u})]$.

Remark. Suppose that $A$ is an upper (or lower) block-triangular matrix of the form

$$A = \begin{bmatrix} B_1 & * & \cdots & * \\ 0 & B_2 & \cdots & * \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & B_r \end{bmatrix},$$
where each $B_i$ is a square block (the blocks may have the same or different sizes). Then
$$\det(A) = \det(B_1)\det(B_2)\cdots\det(B_r),$$
$$\det(A - \lambda I) = \det(B_1 - \lambda I)\det(B_2 - \lambda I)\cdots\det(B_r - \lambda I).$$
Therefore, the eigenvalues of $A$ = the collection of all eigenvalues of $B_1, \ldots, B_r$.

Example 13.4. Factorize
$$A = \begin{bmatrix} 2 & 9 & 0 & 2 \\ -1 & 2 & 1 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix}.$$
Solution. Step 1. Find eigenvalues. We notice that
$$A - \lambda I = \begin{bmatrix} 2-\lambda & 9 & 0 & 2 \\ -1 & 2-\lambda & 1 & 0 \\ 0 & 0 & 3-\lambda & 0 \\ 0 & 0 & 1 & -1-\lambda \end{bmatrix}$$
is upper block-triangular. Thus,
$$\det(A - \lambda I) = \det\begin{bmatrix} 2-\lambda & 9 \\ -1 & 2-\lambda \end{bmatrix}\det\begin{bmatrix} 3-\lambda & 0 \\ 1 & -1-\lambda \end{bmatrix} = \left[(2-\lambda)^2 + 9\right]\left[(3-\lambda)(-1-\lambda)\right] = 0.$$
Hence
$$(2-\lambda)^2 + 9 = 0 \implies \lambda^2 - 4\lambda + 13 = 0 \implies \lambda = 2 \pm 3i,$$
or
$$(3-\lambda)(-1-\lambda) = 0 \implies \lambda = 3,\ \lambda = -1.$$
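The eigenvalues found in Step 1 of Example 13.4 can be double-checked numerically. A plain-Python sketch (the cofactor-determinant helper is ad hoc, not from the lecture):

```python
# Check that 3, -1, 2 + 3i, 2 - 3i are roots of det(A - lambda I)
# for the 4x4 matrix A of Example 13.4.
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def A_minus(l):
    """Return A - l*I for the matrix of Example 13.4."""
    A = [[2, 9, 0, 2],
         [-1, 2, 1, 0],
         [0, 0, 3, 0],
         [0, 0, 1, -1]]
    return [[A[i][j] - (l if i == j else 0) for j in range(4)]
            for i in range(4)]

for lam in [3, -1, 2 + 3j, 2 - 3j]:
    assert det(A_minus(lam)) == 0
```

Since all entries involved are (Gaussian) integers, the determinants evaluate exactly to zero.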

There are two real eigenvalues $\lambda_1 = 3$, $\lambda_2 = -1$, and one pair of complex eigenvalues, $\mu = 2 + 3i$ and $\bar\mu = 2 - 3i$, with real part and imaginary part
$$\operatorname{Re}(\mu) = 2, \qquad \operatorname{Im}(\mu) = 3.$$

Step 2. We next find eigenvectors. For $\lambda_1 = 3$, we carry out row operations:

$$A - 3I = \begin{bmatrix} -1 & 9 & 0 & 2 \\ -1 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -4 \end{bmatrix} \xrightarrow[R_3 \leftrightarrow R_4]{R_2 - R_1 \to R_2} \begin{bmatrix} -1 & 9 & 0 & 2 \\ 0 & -10 & 1 & -2 \\ 0 & 0 & 1 & -4 \\ 0 & 0 & 0 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 0 & -\frac{19}{5} \\ 0 & 1 & 0 & -\frac{1}{5} \\ 0 & 0 & 1 & -4 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
The system becomes
$$x_1 - \tfrac{19}{5}x_4 = 0, \qquad x_2 - \tfrac{1}{5}x_4 = 0, \qquad x_3 - 4x_4 = 0.$$
The eigenvector is, taking $x_4 = 5$,
$$\vec{v}_1 = \begin{bmatrix} 19 \\ 1 \\ 20 \\ 5 \end{bmatrix}, \qquad \lambda_1 = 3.$$
For $\lambda_2 = -1$, we proceed as follows:

$$A + I = \begin{bmatrix} 3 & 9 & 0 & 2 \\ -1 & 3 & 1 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \to \begin{bmatrix} 3 & 9 & 0 & 2 \\ -1 & 3 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \xrightarrow{3R_2 + R_1 \to R_1} \begin{bmatrix} 0 & 18 & 0 & 2 \\ -1 & 3 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
$$\xrightarrow[(-3)R_1 + R_2 \to R_2]{R_1/18 \to R_1} \begin{bmatrix} 0 & 1 & 0 & \frac{1}{9} \\ -1 & 0 & 0 & -\frac{1}{3} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & 0 & \frac{1}{3} \\ 0 & 1 & 0 & \frac{1}{9} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
We obtain that
$$x_1 = -\tfrac{1}{3}x_4, \qquad x_2 = -\tfrac{1}{9}x_4, \qquad x_3 = 0.$$
By taking $x_4 = -9$,
$$\vec{v}_2 = \begin{bmatrix} 3 \\ 1 \\ 0 \\ -9 \end{bmatrix}, \qquad \lambda_2 = -1.$$
For $\mu = 2 + 3i$, we proceed analogously:

$$A - \mu I = \begin{bmatrix} -3i & 9 & 0 & 2 \\ -1 & -3i & 1 & 0 \\ 0 & 0 & 1-3i & 0 \\ 0 & 0 & 1 & -3-3i \end{bmatrix}.$$
Using $R_2 - R_3/(1-3i) \to R_2$ and $R_4 - R_3/(1-3i) \to R_4$ to clear the third column, then $R_1 - 3iR_2 \to R_1$, we get
$$\begin{bmatrix} 0 & 0 & 0 & 2 \\ -1 & -3i & 0 & 0 \\ 0 & 0 & 1-3i & 0 \\ 0 & 0 & 0 & -3-3i \end{bmatrix}.$$
Scaling $R_3/(1-3i) \to R_3$ and $R_4/(-3-3i) \to R_4$, clearing $R_1$ with $R_1 - 2R_4 \to R_1$, and reordering rows gives
$$\begin{bmatrix} -1 & -3i & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
The system $(A - \mu I)\vec{x} = \vec{0}$ becomes
$$-x_1 - 3ix_2 = 0, \qquad x_3 = 0, \qquad x_4 = 0.$$
We obtain the complex eigenvector, by taking $x_2 = 1$,
$$\vec{u} = \begin{bmatrix} -3i \\ 1 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} + i\begin{bmatrix} -3 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \qquad \mu = 2 + 3i,$$
so that
$$\operatorname{Re}(\mu) = 2, \quad \operatorname{Re}(\vec{u}) = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \qquad \operatorname{Im}(\mu) = 3, \quad \operatorname{Im}(\vec{u}) = \begin{bmatrix} -3 \\ 0 \\ 0 \\ 0 \end{bmatrix}.$$

Step #3. We finally assemble $P$ and $B$ as follows:
$$B = \begin{bmatrix} 3 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 2 & 3 \\ 0 & 0 & -3 & 2 \end{bmatrix}, \qquad P = \begin{bmatrix} 19 & 3 & 0 & -3 \\ 1 & 1 & 1 & 0 \\ 20 & 0 & 0 & 0 \\ 5 & -9 & 0 & 0 \end{bmatrix}.$$
We may verify $AP = PB$ by a direct computation:
$$AP = \begin{bmatrix} 2 & 9 & 0 & 2 \\ -1 & 2 & 1 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 1 & -1 \end{bmatrix}\begin{bmatrix} 19 & 3 & 0 & -3 \\ 1 & 1 & 1 & 0 \\ 20 & 0 & 0 & 0 \\ 5 & -9 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 57 & -3 & 9 & -6 \\ 3 & -1 & 2 & 3 \\ 60 & 0 & 0 & 0 \\ 15 & 9 & 0 & 0 \end{bmatrix},$$
$$PB = \begin{bmatrix} 19 & 3 & 0 & -3 \\ 1 & 1 & 1 & 0 \\ 20 & 0 & 0 & 0 \\ 5 & -9 & 0 & 0 \end{bmatrix}\begin{bmatrix} 3 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 0 & 0 & 2 & 3 \\ 0 & 0 & -3 & 2 \end{bmatrix} = \begin{bmatrix} 57 & -3 & 9 & -6 \\ 3 & -1 & 2 & 3 \\ 60 & 0 & 0 & 0 \\ 15 & 9 & 0 & 0 \end{bmatrix}.$$

We conclude our discussion by stating (without proof) the following general diagonalization theorem.

Theorem (Jordan Canonical Form). Let $C$ be an $n \times n$ complex matrix (i.e., the entries of $C$ could be either real or complex numbers). Then there are a complex invertible matrix $P$ and a complex block-diagonal matrix $J$ such that
$$P^{-1}CP = J.$$
Furthermore, the matrix $J$ has the form
$$J = \begin{bmatrix} J_1 & 0 & \cdots & 0 \\ 0 & J_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & J_p \end{bmatrix},$$
where for each $i$, $i = 1, \ldots, p$, $J_i$ is either a diagonal matrix of dimension $p_i \times p_i$,
$$J_i = \begin{bmatrix} c_i & 0 & \cdots & 0 \\ 0 & c_i & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & c_i \end{bmatrix}_{p_i \times p_i}, \qquad c_i \text{ an eigenvalue (real or complex)},$$
or a Jordan block. We call such a matrix $J$ the Jordan Canonical Form. The block

$$J_i = \begin{bmatrix} c_i & 1 & 0 & \cdots & 0 \\ 0 & c_i & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & c_i & 1 \\ 0 & 0 & \cdots & 0 & c_i \end{bmatrix}_{p_i \times p_i}$$
occurs when $c_i$ is an eigenvalue (real or complex) with multiplicity of at least $p_i$.
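Returning to Example 13.4, the full factorization can also be confirmed by machine. A plain-Python sketch (the `mat_mul` helper is ad hoc, not from the lecture):

```python
# Verify A P = P B for the factorization found in Example 13.4,
# where P = [v1, v2, Re(u), Im(u)] and B is block diagonal.
A = [[2, 9, 0, 2],
     [-1, 2, 1, 0],
     [0, 0, 3, 0],
     [0, 0, 1, -1]]

P = [[19, 3, 0, -3],
     [1, 1, 1, 0],
     [20, 0, 0, 0],
     [5, -9, 0, 0]]

B = [[3, 0, 0, 0],
     [0, -1, 0, 0],
     [0, 0, 2, 3],
     [0, 0, -3, 2]]

def mat_mul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# integer arithmetic is exact, so both products match entry for entry
assert mat_mul(A, P) == mat_mul(P, B)
# → [[57, -3, 9, -6], [3, -1, 2, 3], [60, 0, 0, 0], [15, 9, 0, 0]]
```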

Homework #13

1. Factorize the matrices:
$$\text{(a)}\ A = \begin{bmatrix} 1 & 2 & 1 \\ -1 & 3 & 1 \\ 0 & 0 & 2 \end{bmatrix}, \qquad \text{(b)}\ A = \begin{bmatrix} 5 & 0 & 0 & 0 \\ -2 & 3 & 0 & 0 \\ 0 & 0 & 4 & 3 \\ 0 & 0 & -3 & 4 \end{bmatrix}.$$

2. Let $A$ be an $n \times n$ symmetric matrix whose entries are all real numbers.

(a) Show that for any real or complex vector $X_{n \times 1}$, the quantity
$$q = \bar{X}^T A X$$
is a real number. (Hint: show $\bar{q} = q^T = q$.)

(b) Show that if $\lambda$ is an eigenvalue of $A$, then $\lambda$ must be a real number. (Hint: use part (a) with $X$ being an eigenvector.)

(c) Show that $A$ has exactly $n$ real eigenvalues (counting multiplicities).
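As a sanity check (not a proof) for problem 2(a), one can evaluate $q = \bar{X}^T A X$ for a sample symmetric matrix and complex vector; the matrix and vector below are arbitrary examples, not from the problem:

```python
# Numerical illustration of Homework 2(a): for real symmetric A and a
# complex vector X, q = conj(X)^T A X has zero imaginary part.
A = [[2, 1, 0],
     [1, 3, -1],
     [0, -1, 4]]          # real symmetric: A[i][j] == A[j][i]

X = [1 + 2j, -3j, 2 - 1j]

q = sum(X[i].conjugate() * A[i][j] * X[j]
        for i in range(3) for j in range(3))

# the (i, j) and (j, i) terms are complex conjugates, so imaginary
# parts cancel exactly here (integer-valued entries)
assert q.imag == 0.0
```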
