Math 511 Advanced Linear Algebra Spring 2006
MATH 511 ADVANCED LINEAR ALGEBRA, SPRING 2006
Sherod Eubanks

HOMEWORK 2
§2.1: 2, 5, 9, 12
§2.3: 3, 6
§2.4: 2, 4, 5, 9, 11

Section 2.1: Unitary Matrices

Problem 2. If $\lambda \in \sigma(U)$ and $U \in M_n$ is unitary, show that $|\lambda| = 1$.

Solution. If $\lambda \in \sigma(U)$, $U \in M_n$ is unitary, and $Ux = \lambda x$ for $x \neq 0$, then by Theorem 2.1.4(g) we have
\[
\|x\|_{\mathbb{C}^n} = \|Ux\|_{\mathbb{C}^n} = \|\lambda x\|_{\mathbb{C}^n} = |\lambda| \, \|x\|_{\mathbb{C}^n},
\]
hence $|\lambda| = 1$, as desired.

Problem 5. Show that the permutation matrices in $M_n$ are orthogonal and that the permutation matrices form a subgroup of the group of real orthogonal matrices. How many different permutation matrices are there in $M_n$?

Solution. By definition, a matrix $P \in M_n$ is called a permutation matrix if exactly one entry in each row and column is equal to $1$, and all other entries are $0$. That is, letting $e_i \in \mathbb{C}^n$ denote the standard basis element of $\mathbb{C}^n$ that has a $1$ in the $i$th row and zeros elsewhere, and letting $S_n$ be the set of all permutations of $n$ elements, we have $P = [e_{\sigma(1)} \mid \cdots \mid e_{\sigma(n)}] = P_\sigma$ for some permutation $\sigma \in S_n$, where $\sigma(k)$ denotes the image of $k$ under $\sigma$. Observe that for any $\sigma \in S_n$, since
\[
e_{\sigma(i)}^T e_{\sigma(j)} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}
\]
for $1 \le i, j \le n$ by the definition of $e_i$, we have
\[
P_\sigma^T P_\sigma = \begin{bmatrix} e_{\sigma(1)}^T e_{\sigma(1)} & \cdots & e_{\sigma(1)}^T e_{\sigma(n)} \\ \vdots & \ddots & \vdots \\ e_{\sigma(n)}^T e_{\sigma(1)} & \cdots & e_{\sigma(n)}^T e_{\sigma(n)} \end{bmatrix} = I_n \ \bigl(= P_\sigma P_\sigma^T\bigr),
\]
where $I_n$ denotes the $n \times n$ identity matrix. Hence $P_\sigma^{-1} = P_\sigma^T$ (permutation matrices are trivially nonsingular), and so $P_\sigma$ is (real) orthogonal. Since the above holds for any $\sigma \in S_n$, every permutation matrix is orthogonal. Now, notice that $I_n$ is the permutation matrix corresponding to the identity of the group $S_n$, so the set of all permutation matrices in $M_n$ is (trivially) nonempty and contains the identity element of $GL_n$.
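The orthogonality just established, and the count of permutation matrices discussed below, can be spot-checked with a small pure-Python sketch (the helpers `perm_matrix`, `matmul`, and `transpose` are our own illustrative names; permutations are 0-indexed here, unlike the 1-indexed convention in the text):

```python
from itertools import permutations

def perm_matrix(sigma):
    """Build P_sigma = [e_{sigma(0)} | ... | e_{sigma(n-1)}] as a list of rows."""
    n = len(sigma)
    # Column j of P is e_{sigma(j)}, i.e. P[i][j] = 1 iff i == sigma(j).
    return [[1 if i == sigma[j] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

n = 4
identity = [[1 if i == j else 0 for j in range(n)] for i in range(n)]

# Every permutation matrix satisfies P^T P = I, i.e. P is real orthogonal.
all_orthogonal = all(
    matmul(transpose(perm_matrix(s)), perm_matrix(s)) == identity
    for s in permutations(range(n))
)
print(all_orthogonal)                     # True
print(len(list(permutations(range(n)))))  # 24 = 4! permutation matrices in M_4
```

The exhaustive loop over all of $S_4$ mirrors the "for any $\sigma \in S_n$" step of the proof.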
Moreover, by the preceding paragraph, for each $\sigma \in S_n$ and its corresponding permutation matrix $P_\sigma$ we have $P_\sigma^T = P_\sigma^{-1}$, and observe further that $P_\sigma^{-1} = P_{\sigma^{-1}}$: indeed, $P_\sigma$ has a $1$ in column $i$, row $\sigma(i)$, while $P_\sigma^{-1} = P_\sigma^T = P_\tau$ has a $1$ in column $\sigma(i)$, row $\tau(\sigma(i)) = i$, for all $i = 1, \ldots, n$. Thus $\tau \circ \sigma = e$, the identity element of $S_n$, so $\tau = \sigma^{-1}$ since $S_n$ is a group. As such, the inverse (transpose) of a permutation matrix is again a permutation matrix. Finally, if $\nu \in S_n$ is any other permutation, then the preceding discussion shows that
\[
P_\sigma^T P_\nu = \begin{bmatrix} e_{\sigma(1)}^T e_{\nu(1)} & \cdots & e_{\sigma(1)}^T e_{\nu(n)} \\ \vdots & \ddots & \vdots \\ e_{\sigma(n)}^T e_{\nu(1)} & \cdots & e_{\sigma(n)}^T e_{\nu(n)} \end{bmatrix} = [e_{(\sigma^{-1} \circ \nu)(1)} \mid \cdots \mid e_{(\sigma^{-1} \circ \nu)(n)}] = P_{\sigma^{-1} \circ \nu},
\]
since the $(i,j)$ entry $e_{\sigma(i)}^T e_{\nu(j)}$ equals $1$ exactly when $i = \sigma^{-1}(\nu(j))$. Hence, as $\sigma^{-1} \circ \nu \in S_n$, the product of permutation matrices is again a permutation matrix (this is easy to see directly from the definition of a permutation matrix, but it illustrates the connection between permutation matrices in $M_n$ and permutations in $S_n$). Therefore, the set of all permutation matrices is not only a subgroup of $GL_n$; since each is orthogonal, it is a subgroup of the group of real orthogonal matrices as well. Moreover, the mapping $\sigma \mapsto P_\sigma$ is a bijection, hence as $o(S_n) = n!$, it follows that there are $n!$ different permutation matrices in $M_n$ (thus, the order of the subgroup in question is $n!$).

Problem 9. If $U \in M_n$ is unitary, show that $U^T$, $\overline{U}$, and $U^*$ are all unitary.

Solution. Let $U \in M_n$ be unitary. That $U^*$ is unitary follows readily from Theorem 2.1.4(d); that $U^T$ is unitary follows from the fact that, as the columns of $U$ form an orthonormal set by Theorem 2.1.4(e), the rows of $U^T$ form an orthonormal set. Now, since $U^* = \overline{U}^T$ is unitary, the rows of $\overline{U}^T$ form an orthonormal set, hence the columns of $\overline{U}$ form an orthonormal set, and thus $\overline{U}$ is unitary.

Problem 12. Show that if $A \in M_n$ is similar to a unitary matrix, then $A^{-1}$ is similar to $A^*$.

Solution.
If $A \in M_n$ is similar to the unitary matrix $U$, then there is a nonsingular matrix $S$ such that $U = SAS^{-1}$, hence $AS^{-1} = S^{-1}U$, and as such
\[
A(S^{-1}U^*S) = S^{-1}UU^*S = S^{-1}S = I_n.
\]
Since $S$ and $U$ are nonsingular, $S^{-1}U^*S$ is nonsingular, hence it follows that $A$ is nonsingular (by the exercise preceding Theorem 2.1.4). Thus $A^{-1} = S^{-1}U^*S$, so that $U^* = SA^{-1}S^{-1}$; and since $U = SAS^{-1}$, we also have $U^* = (S^{-1})^*A^*S^*$. Equating the two expressions for $U^*$, and using that $S^*S$ is nonsingular and $(S^{-1})^* = (S^*)^{-1}$ by the nonsingularity of $S$, we obtain
\[
A^{-1} = S^{-1}(S^{-1})^*A^*S^*S = (S^*S)^{-1}A^*(S^*S),
\]
which implies that $A^{-1}$ and $A^*$ are similar.

Section 2.3: Schur's Unitary Triangularization Theorem

Problem 3. Let $A \in M_n(\mathbb{R})$. Explain why the nonreal eigenvalues of $A$ (if any) must occur in conjugate pairs.

Solution. A simple answer to the given question is that since $A \in M_n(\mathbb{R})$, the characteristic polynomial $p_A(t)$ has real coefficients, and the nonreal roots of a real polynomial occur in conjugate pairs; hence any nonreal eigenvalues of $A$ must occur in conjugate pairs. This also follows from Theorem 2.3.4, since there is a real orthogonal matrix $Q \in M_n(\mathbb{R})$ such that $Q^T A Q \in M_n(\mathbb{R})$, where
\[
Q^T A Q = \begin{bmatrix} A_1 & & & * \\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_k \end{bmatrix},
\]
and each $A_i$ is either a real $1 \times 1$ matrix (so $A_i \in \sigma(A)$), or a real $2 \times 2$ matrix with a pair of nonreal complex conjugate eigenvalues. Hence, since $\sigma(A) = \sigma(Q^T A Q)$ by similarity, any nonreal eigenvalues of $A$ must occur in conjugate pairs.

Problem 6. Let $A, B \in M_n$ be given, and suppose $A$ and $B$ are simultaneously similar to upper triangular matrices; that is, $S^{-1}AS$ and $S^{-1}BS$ are both upper triangular for some nonsingular $S \in M_n$. Show that every eigenvalue of $AB - BA$ must be zero.

Solution. Put $T_A = S^{-1}AS$ and $T_B = S^{-1}BS$. Since $T_A$ and $T_B$ are upper triangular, $T_A T_B$ and $T_B T_A$ are upper triangular, and $T_A T_B = S^{-1}AS\,S^{-1}BS = S^{-1}ABS$; similarly, $T_B T_A = S^{-1}BAS$.
Now, $T_A T_B - T_B T_A = S^{-1}ABS - S^{-1}BAS = S^{-1}(AB - BA)S$; since $T_A T_B$ and $T_B T_A$ are both upper triangular, $T_A T_B - T_B T_A$ is also upper triangular, and hence the eigenvalues of $AB - BA$ are the diagonal entries of $T_A T_B - T_B T_A$. But if $T_A = [t_{ij}]$ and $T_B = [s_{ij}]$, then $t_{ij} = s_{ij} = 0$ for $i > j$, hence
\[
T_A T_B = \begin{bmatrix} t_{11} & & * \\ & \ddots & \\ 0 & & t_{nn} \end{bmatrix} \begin{bmatrix} s_{11} & & * \\ & \ddots & \\ 0 & & s_{nn} \end{bmatrix} = \begin{bmatrix} t_{11}s_{11} & & * \\ & \ddots & \\ 0 & & t_{nn}s_{nn} \end{bmatrix},
\]
so the diagonal of $T_A T_B$ is $t_{ii}s_{ii}$, $i = 1, \ldots, n$, and by a similar computation the diagonal of $T_B T_A$ is $s_{ii}t_{ii}$ (i.e., the two sets of diagonal entries are the same). Therefore, the diagonal of $T_A T_B - T_B T_A$ is $t_{ii}s_{ii} - s_{ii}t_{ii} = 0$ for all $i = 1, \ldots, n$, which implies that every eigenvalue of $AB - BA$ is zero, as desired.

Section 2.4: Some Implications of Schur's Theorem

Problem 2. If $A \in M_n$, show that the rank of $A$ is not less than the number of nonzero eigenvalues of $A$.

Solution. If $A \in M_n$ and $\sigma(A) = \{\lambda_1, \ldots, \lambda_n\}$, then by Schur's Theorem there is a unitary matrix $U$ such that $U^*AU = T = [t_{ij}]$, where $T$ is upper triangular and $t_{ii} = \lambda_i$, $i = 1, \ldots, n$. If $k$ of the eigenvalues of $A$ are nonzero, then $T$ has $k$ nonzero and $n - k$ zero entries along its main diagonal. As such, the $k$ columns containing the nonzero diagonal entries constitute a linearly independent set (since $T$ is upper triangular), and so $\operatorname{rank}(T) \ge k$. But then $\operatorname{rank}(A) \ge k$, since $U$ is nonsingular and rank is invariant under multiplication by nonsingular matrices. Of course, we may certainly have $\operatorname{rank}(A) > k$, for if
\[
A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},
\]
then $A$ is already upper triangular and $\sigma(A) = \{0\}$, so even though $A$ has no nonzero eigenvalues, $\operatorname{rank}(A) = 1 > 0$.

Problem 4. Let $A \in M_n$ be a nonsingular matrix. Show that any matrix that commutes with $A$ also commutes with $A^{-1}$.

Solution. Here we provide two proofs of the given statement. First, if $A \in M_n$ is nonsingular and $AB = BA$ for some $B \in M_n$, then $B = A^{-1}BA$, hence $BA^{-1} = A^{-1}B$; so $B$ commutes with $A^{-1}$ if and only if it commutes with $A$.
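This first argument is easy to spot-check in exact arithmetic. A minimal sketch using $2 \times 2$ matrices over the rationals (the particular $A$ and $B$, and the helpers `matmul` and `inv2`, are our own illustrative choices):

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Product of two 2x2 matrices stored as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a nonsingular 2x2 matrix via the adjugate formula."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[F(2), F(1)], [F(1), F(1)]]
B = [[F(8), F(3)], [F(3), F(5)]]   # B = 3A + 2I, so AB = BA by construction

assert matmul(A, B) == matmul(B, A)        # B commutes with A
Ainv = inv2(A)
assert matmul(Ainv, B) == matmul(B, Ainv)  # hence B commutes with A^{-1}
print("checks pass")
```

Choosing $B$ as a polynomial in $A$ guarantees $AB = BA$, which is all the hypothesis requires.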
Second, by Corollary 2.4.4, since $A \in M_n$ is nonsingular, there is a polynomial $q(t)$, whose coefficients depend on $A$ and with $\deg q(t) \le n - 1$, such that $A^{-1} = q(A)$. Put $k = \deg q(t)$ and write $q(t) = a_k t^k + a_{k-1} t^{k-1} + \cdots + a_1 t + a_0$, where $a_k \neq 0$. Now, observe that showing that $BA = AB$ implies $Bq(A) = q(A)B$ will prove the given statement: indeed, $BA = AB$ gives $BA^j = A^jB$ for every $j \ge 0$ (by induction on $j$), hence
\[
Bq(A) = \sum_{j=0}^{k} a_j B A^j = \sum_{j=0}^{k} a_j A^j B = q(A)B,
\]
that is, $BA^{-1} = A^{-1}B$.
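For $n = 2$ the polynomial $q$ can be written down explicitly: the Cayley–Hamilton theorem gives $A^2 - \operatorname{tr}(A)A + \det(A)I = 0$, so $A^{-1} = q(A)$ with $q(t) = (\operatorname{tr}(A) - t)/\det(A)$. A small exact-arithmetic sketch of the second proof (the matrices chosen and the helper names are ours):

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matsub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def scal(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

I2 = [[F(1), F(0)], [F(0), F(1)]]
A = [[F(2), F(1)], [F(1), F(1)]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# By Cayley-Hamilton, A^{-1} = q(A) with q(t) = (tr(A) - t) / det(A),
# a polynomial in A of degree 1 <= n - 1, as in Corollary 2.4.4.
Ainv = scal(F(1) / det, matsub(scal(tr, I2), A))
assert matmul(A, Ainv) == I2

B = [[F(8), F(3)], [F(3), F(5)]]   # B = 3A + 2I commutes with A
assert matmul(B, A) == matmul(A, B)
# B commutes with A, hence with every power of A, hence with q(A) = A^{-1}:
assert matmul(B, Ainv) == matmul(Ainv, B)
print("ok")
```

Since $q(A)$ is built only from powers of $A$, anything commuting with $A$ commutes with $q(A)$ term by term, which is exactly the mechanism the proof exploits.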