Chapter 0 Preliminaries
Objective of the course

Introduce basic matrix results and techniques that are useful in theory and applications. It is important to know many results, but there is no way I can tell you all of them. I would like to introduce techniques so that you can obtain the results you want. This is the standard teaching philosophy of "It is better to teach students how to fish instead of just giving them fish."

Assumption

You are familiar with

• Linear equations, solution sets, elementary row operations.
• Matrices, column space, row space, null space, rank.
• Determinant, eigenvalues, eigenvectors, diagonal form.
• Vector spaces, basis, change of bases.
• Linear transformations, range space, kernel.
• Inner product, Gram-Schmidt process for real matrices.
• Basic operations of complex numbers.
• Block matrices.

Notation

• M_n(F) and M_{m,n}(F) are the sets of n × n and m × n matrices over F = R or C (or a more general field).
• F^n is the set of column vectors of length n with entries in F.
• Sometimes we write M_n, M_{m,n} if F = C.
• If A = \begin{pmatrix} A_1 & 0 \\ 0 & A_2 \end{pmatrix} ∈ M_{m+n}(F) with A_1 ∈ M_m(F) and A_2 ∈ M_n(F), we write A = A_1 ⊕ A_2.
• x^t and A^t denote the transposes of a vector x and a matrix A.
• For a complex matrix A, Ā denotes the matrix obtained from A by replacing each entry by its complex conjugate. Furthermore, A* = (Ā)^t.

More results and examples on block matrix multiplication will be used as needed.

1 Similarity, Jordan form, and applications

1.1 Similarity

Definition 1.1.1 Two matrices A, B ∈ M_n are similar if there is an invertible S ∈ M_n such that S^{-1}AS = B; equivalently, A = T^{-1}BT with T = S^{-1}. A matrix A is diagonalizable if A is similar to a diagonal matrix.

Remark 1.1.2 If A and B are similar, then det(A) = det(B) and det(zI − A) = det(zI − B). Furthermore, if S^{-1}AS = B, then Bx = µx if and only if A(Sx) = µ(Sx).

Theorem 1.1.3 Let A ∈ M_n(F). Then A is diagonalizable over F if and only if it has n linearly independent eigenvectors in F^n.

Proof. An invertible matrix S ∈ M_n with columns x_1, ..., x_n satisfies S^{-1}AS = D = diag(λ_1, ..., λ_n) if and only if AS = SD, i.e., Ax_j = λ_j x_j for j = 1, ..., n. Thus such an S exists if and only if A has n linearly independent eigenvectors.

Remark 1.1.4 Suppose A = SDS^{-1} with D = diag(λ_1, ..., λ_n). Let S have columns x_1, ..., x_n and S^{-1} have rows y_1^t, ..., y_n^t. Then

Ax_j = λ_j x_j and y_j^t A = λ_j y_j^t, j = 1, ..., n,

and

A = Σ_{j=1}^n λ_j x_j y_j^t.

Thus, x_1, ..., x_n are the (right) eigenvectors of A, and y_1^t, ..., y_n^t are the left eigenvectors of A.

Remark 1.1.5 For a real matrix A ∈ M_n(R), we may not be able to find real eigenvalues, and we may not be able to find n linearly independent eigenvectors even if det(zI − A) = 0 has only real roots.

Example 1.1.6 Consider

A = \begin{pmatrix} 0 & 1 \\ −1 & 0 \end{pmatrix} and B = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.

Then A has no real eigenvalues, and B does not have two linearly independent eigenvectors.

Remark 1.1.7 By the fundamental theorem of algebra, every complex polynomial can be written as a product of linear factors. Consequently, for every A ∈ M_n(C), the characteristic polynomial has linear factors:

det(zI − A) = (z − λ_1) ··· (z − λ_n).

So, we need only find n linearly independent eigenvectors in order to show that A is diagonalizable.
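The decomposition in Remark 1.1.4 is easy to verify numerically. Below is a minimal numpy sketch; the 2 × 2 matrix A is a hypothetical example, chosen only because it has distinct real eigenvalues, so Theorem 1.1.3 applies.

    import numpy as np

    # Hypothetical example matrix; any diagonalizable matrix works here.
    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    lam, S = np.linalg.eig(A)     # columns of S are right eigenvectors x_j
    S_inv = np.linalg.inv(S)      # rows of S_inv are left eigenvectors y_j^t

    # S^{-1} A S = D = diag(lam), as in Theorem 1.1.3.
    assert np.allclose(S_inv @ A @ S, np.diag(lam))

    # Rank-one expansion A = sum_j lam_j x_j y_j^t from Remark 1.1.4.
    expansion = sum(lam[j] * np.outer(S[:, j], S_inv[j, :]) for j in range(len(lam)))
    assert np.allclose(A, expansion)

    # Left eigenvector relation y_j^t A = lam_j y_j^t.
    for j in range(len(lam)):
        assert np.allclose(S_inv[j, :] @ A, lam[j] * S_inv[j, :])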
1.2 Triangular form and Jordan form

Question What simple form can we get by similarity if A ∈ M_n is not diagonalizable?

Theorem 1.2.1 Let A ∈ M_n be such that det(zI − A) = (z − λ_1) ··· (z − λ_n). For any permutation (j_1, ..., j_n) of (1, ..., n), A is similar to a matrix in (upper) triangular form with diagonal entries λ_{j_1}, ..., λ_{j_n}.

Proof. By induction on n. The result clearly holds if n = 1. Suppose Ax_1 = λ_{j_1} x_1. Let S_1 be invertible with first column equal to x_1. Then

AS_1 = S_1 \begin{pmatrix} λ_{j_1} & * \\ 0 & A_1 \end{pmatrix}, so that S_1^{-1}AS_1 = \begin{pmatrix} λ_{j_1} & * \\ 0 & A_1 \end{pmatrix}.

Note that the eigenvalues of A_1 are λ_{j_2}, ..., λ_{j_n}. By the induction assumption, there is an invertible S_2 such that S_2^{-1}A_1S_2 is in upper triangular form with diagonal entries λ_{j_2}, ..., λ_{j_n}. Let S = S_1([1] ⊕ S_2). Then S^{-1}AS has the required form.

Note that we can apply the argument to A^t to get an invertible T such that T^{-1}A^tT is in upper triangular form. So, S^{-1}AS is in lower triangular form for S = (T^t)^{-1}.

Lemma 1.2.2 Suppose A ∈ M_m and B ∈ M_n have no common eigenvalues. Then for any C ∈ M_{m,n} there is a unique X ∈ M_{m,n} such that AX − XB = C.

Proof. Suppose S^{-1}AS = Ã is in upper triangular form, and T^{-1}BT = B̃ is in lower triangular form. Multiplying the equation AX − XB = C on the left by S^{-1} and on the right by T, we get ÃY − YB̃ = C̃ with Y = S^{-1}XT and C̃ = S^{-1}CT. Once ÃY − YB̃ = C̃ is solved, we get the solution X = SYT^{-1} for the original problem.

Now, for each column col_j(C̃), j = 1, ..., n, we have Ã col_j(Y) − Σ_{i=1}^n B̃_{ij} col_i(Y) = col_j(C̃). Stacking the columns of Y, we get a system of linear equations with coefficient matrix

I_n ⊗ Ã − B̃^t ⊗ I_m = \begin{pmatrix} Ã & & \\ & \ddots & \\ & & Ã \end{pmatrix} − \begin{pmatrix} B̃_{11}I_m & ⋯ & B̃_{n1}I_m \\ & \ddots & ⋮ \\ & & B̃_{nn}I_m \end{pmatrix}.

Because B̃ is in lower triangular form, B̃^t ⊗ I_m is upper triangular; and because the upper triangular matrix Ã = S^{-1}AS and the lower triangular matrix B̃ = T^{-1}BT have no common eigenvalues, the coefficient matrix is in upper triangular form with nonzero diagonal entries. So, the equation ÃY − YB̃ = C̃ always has a unique solution.

Theorem 1.2.3 Suppose A = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix} ∈ M_n such that A_{11} ∈ M_k and A_{22} ∈ M_{n−k} have no common eigenvalue. Then A is similar to A_{11} ⊕ A_{22}.

Proof. By the previous lemma (applied to A_{11}X − XA_{22} = −A_{12}), there is X such that A_{11}X + A_{12} = XA_{22}. Let

S = \begin{pmatrix} I_k & X \\ 0 & I_{n−k} \end{pmatrix},

so that AS = S(A_{11} ⊕ A_{22}). The result follows.

Corollary 1.2.4 Suppose A ∈ M_n has distinct eigenvalues λ_1, ..., λ_k. Then A is similar to A_{11} ⊕ ··· ⊕ A_{kk} such that A_{jj} has (only one distinct) eigenvalue λ_j for j = 1, ..., k.

Definition 1.2.5 Let J_k(λ) ∈ M_k be the matrix with all diagonal entries equal to λ and all superdiagonal entries equal to 1, i.e.,

J_k(λ) = \begin{pmatrix} λ & 1 & & \\ & \ddots & \ddots & \\ & & λ & 1 \\ & & & λ \end{pmatrix} ∈ M_k.

It is called a (an upper triangular) Jordan block of λ of size k.

Theorem 1.2.6 Every A ∈ M_n is similar to a direct sum of Jordan blocks.

Proof. We may assume that A = A_{11} ⊕ ··· ⊕ A_{kk} as in Corollary 1.2.4. If we can find invertible matrices S_1, ..., S_k such that S_i^{-1}A_{ii}S_i is in Jordan form, then S^{-1}AS is in Jordan form for S = S_1 ⊕ ··· ⊕ S_k.

Focus on T = A_{ii} − λ_i I_{n_i}, where n_i is the size of A_{ii}. If S_i^{-1}TS_i is in Jordan form, then so is S_i^{-1}A_{ii}S_i. Now, we use a proof of Mark Wildon: https://www.math.vt.edu/people/renardym/class home/Jordan.pdf

Note: The vectors u_1, Tu_1, ..., T^{n_1}u_1 in the proof are called a Jordan chain.

Example 1.2.7 Let

T = \begin{pmatrix} 0 & 0 & 1 & 2 \\ 0 & 0 & 3 & 4 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.

Then Te_1 = 0, Te_2 = 0, Te_3 = e_1 + 3e_2, Te_4 = 2e_1 + 4e_2. So, T(V) = span{e_1, e_2}. Now, Te_1 = Te_2 = 0, so that e_1, e_2 form a Jordan basis for T(V). Solving for u_1, u_2 such that T(u_1) = e_1 and T(u_2) = e_2, we let u_1 = −2e_3 + (3/2)e_4 and u_2 = e_3 − (1/2)e_4. Thus, TS = S(J_2(0) ⊕ J_2(0)) with

S = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & −2 & 0 & 1 \\ 0 & 3/2 & 0 & −1/2 \end{pmatrix}.

Example 1.2.8 Let

T = \begin{pmatrix} 0 & 1 & 2 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{pmatrix}.

Then Te_1 = 0, Te_2 = e_1, Te_3 = 2e_1 + 3e_2. So, T(V) = span{e_1, e_2}, and e_2, Te_2 = e_1 form a Jordan basis for T(V). Solving for u_1 such that T(u_1) = e_2, we have u_1 = (−2e_2 + e_3)/3. Thus, TS = SJ_3(0) with

S = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & −2/3 \\ 0 & 0 & 1/3 \end{pmatrix}.
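The Jordan bases computed by hand in Examples 1.2.7 and 1.2.8 can be double-checked with a computer algebra system. Below is a minimal sympy sketch for the matrix of Example 1.2.7; sympy's jordan_form returns a pair (P, J) with T = PJP^{-1}, and the transition matrix P it chooses need not equal the S constructed above.

    from sympy import Matrix, Rational

    # The nilpotent matrix from Example 1.2.7.
    T = Matrix([[0, 0, 1, 2],
                [0, 0, 3, 4],
                [0, 0, 0, 0],
                [0, 0, 0, 0]])

    P, J = T.jordan_form()        # T == P * J * P**-1
    print(J)                      # J_2(0) + J_2(0), as found in Example 1.2.7

    # The matrix S built by hand in Example 1.2.7 also works: T S = S (J_2(0) + J_2(0)).
    S = Matrix([[1, 0, 0, 0],
                [0, 0, 1, 0],
                [0, -2, 0, 1],
                [0, Rational(3, 2), 0, Rational(-1, 2)]])
    J22 = Matrix([[0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])
    assert T * S == S * J22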
Remark 1.2.9 The Jordan form of A can be determined by the rank/nullity of (A − λ_jI)^k for k = 1, 2, .... So, the Jordan blocks are uniquely determined. Indeed, let ℓ_i denote the dimension of ker((A − λI)^i). Then there are ℓ_1 Jordan blocks of λ, and there are ℓ_i − ℓ_{i−1} Jordan blocks of λ of size at least i.

Example 1.2.10 Suppose A ∈ M_9 has distinct eigenvalues λ_1, λ_2, λ_3 such that A − λ_1I has rank 8, A − λ_2I has rank 7, (A − λ_2I)^2 and (A − λ_2I)^3 have rank 5, A − λ_3I has rank 6, and (A − λ_3I)^2 and (A − λ_3I)^3 have rank 5. Then the Jordan form of A is

J_1(λ_1) ⊕ J_2(λ_2) ⊕ J_2(λ_2) ⊕ J_1(λ_3) ⊕ J_1(λ_3) ⊕ J_2(λ_3).

1.3 Implications of the Jordan form

Theorem 1.3.1 Two matrices are similar if and only if they have the same Jordan form.

Proof. If A and B have Jordan form J, then S^{-1}AS = J = T^{-1}BT for some invertible S, T, so that R^{-1}AR = B with R = ST^{-1}. Conversely, if A and B are similar, then (A − λI)^k and (B − λI)^k are similar, hence have the same rank, for every λ and k; so by Remark 1.2.9 they have the same Jordan form.
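Remark 1.2.9 and Theorem 1.3.1 together give a purely computational procedure: compute the nullities ℓ_i = dim ker((A − λI)^i) for each eigenvalue λ, read off the block sizes, and two matrices are similar exactly when this data agrees. The sketch below implements this in sympy; the helper name jordan_structure and the small test matrices J and P are hypothetical choices for illustration, not taken from the notes.

    from sympy import Matrix, eye

    def jordan_structure(A):
        """Return {eigenvalue: sorted block sizes}, via Remark 1.2.9:
        with l_i = dim ker (A - lam*I)^i and l_0 = 0, there are
        l_i - l_{i-1} Jordan blocks of lam of size at least i."""
        n = A.shape[0]
        structure = {}
        for lam, mult in A.eigenvals().items():
            N = A - lam * eye(n)
            ells = [0]                       # l_0 = 0
            while ells[-1] < mult:           # nullities stabilize at the algebraic multiplicity
                ells.append(n - (N ** len(ells)).rank())
            sizes = []
            for i in range(1, len(ells)):
                at_least_i = ells[i] - ells[i - 1]
                at_least_next = ells[i + 1] - ells[i] if i + 1 < len(ells) else 0
                sizes += [i] * (at_least_i - at_least_next)   # blocks of size exactly i
            structure[lam] = sorted(sizes)
        return structure

    # Hypothetical test: hide J_2(0) + J_1(0) + J_1(3) behind a change of basis.
    J = Matrix([[0, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 3]])
    P = Matrix([[1, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1]])
    A = P * J * P.inv()
    print(jordan_structure(A))   # expected: {0: [1, 2], 3: [1]}
    # By Theorem 1.3.1, A and J are similar since they produce the same data.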