
MATH 511 ADVANCED LINEAR ALGEBRA SPRING 2006

Sherod Eubanks

HOMEWORK 2

§2.1 : 2, 5, 9, 12 §2.3 : 3, 6 §2.4 : 2, 4, 5, 9, 11

Section 2.1: Unitary Matrices

Problem 2

If λ ∈ σ(U) and U ∈ Mn is unitary, show that |λ| = 1.

Solution. If λ ∈ σ(U), U ∈ Mn is unitary, and Ux = λx for x ≠ 0, then by Theorem 2.1.4(g) we have ‖x‖ = ‖Ux‖ = ‖λx‖ = |λ|‖x‖ (all norms taken in Cⁿ), hence |λ| = 1, as desired.
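
Remark. Though no part of the assigned problem, the fact just proved is easy to check numerically. The sketch below (Python with numpy; the size n = 6 and the random seed are arbitrary choices) builds a random unitary matrix as the Q-factor of a QR factorization and verifies that its spectrum lies on the unit circle.

```python
import numpy as np

# Build a random unitary matrix: the Q-factor in the QR factorization of an
# (almost surely nonsingular) random complex matrix is unitary.
rng = np.random.default_rng(0)
n = 6
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)
assert np.allclose(U.conj().T @ U, np.eye(n))          # U*U = I, so U is unitary
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)  # |λ| = 1 for every λ ∈ σ(U)
```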

Problem 5

Show that the permutation matrices in Mn are orthogonal and that the permutation matrices form a subgroup of the group of real orthogonal matrices. How many different permutation matrices are there in Mn?

Solution. By definition, a matrix P ∈ Mn is called a permutation matrix if exactly one entry in each row and column is equal to 1, and all other entries are 0. That is, letting ei ∈ Cⁿ denote the standard basis element of Cⁿ that has a 1 in the i-th row and zeros elsewhere, and letting Sn be the set of all permutations on n elements, we have P = [eσ(1) | ··· | eσ(n)] = Pσ for some permutation σ ∈ Sn, where σ(k) denotes the k-th member of σ. Observe that for any σ ∈ Sn, since

\[
e_{\sigma(i)}^T e_{\sigma(j)} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{otherwise,} \end{cases}
\]

for 1 ≤ i, j ≤ n by the definition of the ei, we have

\[
P_\sigma^T P_\sigma = \begin{bmatrix} e_{\sigma(1)}^T e_{\sigma(1)} & \cdots & e_{\sigma(1)}^T e_{\sigma(n)} \\ \vdots & \ddots & \vdots \\ e_{\sigma(n)}^T e_{\sigma(1)} & \cdots & e_{\sigma(n)}^T e_{\sigma(n)} \end{bmatrix} = I_n \ \ (= P_\sigma P_\sigma^T)
\]

(where In denotes the n × n identity matrix). Hence Pσ⁻¹ = Pσᵀ (permutation matrices are trivially nonsingular), and so Pσ is (real) orthogonal. Since the above holds for any σ ∈ Sn, every permutation matrix is orthogonal. Now, notice that In is the permutation matrix corresponding to the identity in the group Sn, so the set of all permutation matrices in Mn is (trivially) nonempty and contains the identity of GLn. Moreover, by the preceding paragraph, Pσᵀ = Pσ⁻¹ for each σ ∈ Sn and its corresponding permutation matrix Pσ, and observe further that Pσ⁻¹ = P_{σ⁻¹}: Pσ has a 1 in column i, row σ(i), while Pσ⁻¹ = Pσᵀ = Pτ has a 1 in column σ(i), row τ(σ(i)) = i for all i = 1, …, n. Thus τ∘σ = e, the identity element of Sn, so τ = σ⁻¹ since Sn is a group. As such, the inverse (equivalently, the transpose) of a permutation matrix is again a permutation matrix. Finally, let ν ∈ Sn be any other permutation; recalling that Pσek = eσ(k) for each k, the column description of permutation matrices then

shows that

\[
P_\sigma P_\nu = P_\sigma \, [\, e_{\nu(1)} \mid \cdots \mid e_{\nu(n)} \,] = [\, e_{\sigma(\nu(1))} \mid \cdots \mid e_{\sigma(\nu(n))} \,] = P_{\sigma \circ \nu},
\]

hence as σ∘ν ∈ Sn, the product of permutation matrices is again a permutation matrix (this is rather trivial given the definition of a permutation matrix, but it illustrates the connection between permutation matrices in Mn and permutations in Sn). Therefore, the set of all permutation matrices is not only a subgroup of GLn; since each is orthogonal, it is a subgroup of the group of all real orthogonal matrices as well. Moreover, the mapping σ ↦ Pσ is a bijection, hence as o(Sn) = n!, it follows that there are n! different permutation matrices in Mn (thus, the order of the subgroup in question is n!).
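
Remark. The three facts just established (orthogonality, the product rule PσPν = Pσ∘ν, and the count n!) can be checked numerically. In the sketch below (numpy; the choice n = 4 and the two sample permutations are arbitrary, and indices run from 0 rather than 1):

```python
import numpy as np
from itertools import permutations

n = 4
I = np.eye(n, dtype=int)

def perm_matrix(sigma):
    # P_sigma = [e_sigma(1) | ... | e_sigma(n)]: column k is e_{sigma(k)}
    return I[:, list(sigma)]

perms = list(permutations(range(n)))
assert len(perms) == 24                          # o(S_4) = 4! = 24

for sigma in perms:                              # every P_sigma is orthogonal
    P = perm_matrix(sigma)
    assert (P.T @ P == I).all()

sigma, nu = perms[5], perms[17]                  # two arbitrary permutations
composed = tuple(sigma[nu[k]] for k in range(n)) # (sigma ∘ nu)(k) = sigma(nu(k))
assert (perm_matrix(sigma) @ perm_matrix(nu) == perm_matrix(composed)).all()
```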

Problem 9

If U ∈ Mn is unitary, show that Uᵀ, Ū, and U* are all unitary.

Solution. Let U ∈ Mn be unitary. That U* is unitary follows readily from Theorem 2.1.4(d); that Uᵀ is unitary follows from the fact that, as the columns of U form an orthonormal set by Theorem 2.1.4(e), the rows of Uᵀ form an orthonormal set. Now, since U* = Ūᵀ is unitary, the rows of Ūᵀ form an orthonormal set, hence the columns of Ū form an orthonormal set, and thus Ū is unitary.
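
Remark. A numerical sketch of this problem (numpy; the same arbitrary random-unitary construction as in the remark to Problem 2):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)                  # random unitary matrix

def is_unitary(M):
    # M is unitary exactly when M*M equals the identity
    return np.allclose(M.conj().T @ M, np.eye(M.shape[0]))

# transpose, entrywise conjugate, and conjugate transpose are all unitary
assert all(is_unitary(M) for M in (U, U.T, U.conj(), U.conj().T))
```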

Problem 12

Show that if A ∈ Mn is similar to a unitary matrix, then A⁻¹ is similar to A*.

Solution. If A ∈ Mn is similar to the unitary matrix U, then there is a nonsingular matrix S such that U = SAS⁻¹, hence AS⁻¹ = S⁻¹U, and as such A(S⁻¹U*S) = S⁻¹UU*S = S⁻¹S = In. Since S and U are nonsingular, S⁻¹U*S is nonsingular, hence it follows that A is nonsingular (by the exercise preceding Theorem 2.1.4). Thus A⁻¹ = S⁻¹U*S, so that U* = SA⁻¹S⁻¹. On the other hand, as U = SAS⁻¹, we have U* = (S⁻¹)*A*S*, so that (S⁻¹)*A*S* = SA⁻¹S⁻¹. Therefore, since S*S is nonsingular and (S⁻¹)* = (S*)⁻¹ by the nonsingularity of S,

\[
A^{-1} = S^{-1}(S^{-1})^* A^* S^* S = (S^* S)^{-1} A^* (S^* S),
\]

which implies that A⁻¹ and A* are similar.
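
Remark. The similarity exhibited above, A⁻¹ = (S*S)⁻¹A*(S*S), is also easy to confirm numerically. In the sketch below (numpy; the size and seed are arbitrary), A is manufactured as S⁻¹US for a random unitary U and a random nonsingular S.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(Z)                                   # unitary
S = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # a.s. nonsingular
A = np.linalg.solve(S, U @ S)                            # A = S^{-1} U S, so U = S A S^{-1}
W = S.conj().T @ S                                       # W = S*S implements the similarity
assert np.allclose(np.linalg.solve(W, A.conj().T @ W),   # (S*S)^{-1} A* (S*S)
                   np.linalg.inv(A))                     # equals A^{-1}
```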

Section 2.3: Schur’s Unitary Triangularization Theorem

Problem 3

Let A ∈ Mn(R). Explain why the nonreal eigenvalues of A (if any) must occur in conjugate pairs.

Solution. A simple answer to the given question is that since A ∈ Mn(R), the characteristic polynomial pA(t) has real coefficients; hence any nonreal roots of pA occur in conjugate pairs, and it follows that any nonreal eigenvalues of A must occur in conjugate pairs. This also follows from Theorem 2.3.4, since there is a real orthogonal Q ∈ Mn(R) such that QᵀAQ ∈ Mn(R), where

\[
Q^T A Q = \begin{bmatrix} A_1 & & & \ast \\ & A_2 & & \\ & & \ddots & \\ 0 & & & A_k \end{bmatrix},
\]

and each Ai is either a real 1 × 1 matrix (whose entry is then a real eigenvalue of A) or a real 2 × 2 matrix with a nonreal pair of complex conjugate eigenvalues. Hence, since σ(A) = σ(QᵀAQ) by similarity, any nonreal eigenvalues of A must occur in conjugate pairs.
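
Remark. The block triangular form of Theorem 2.3.4 has a standard computational counterpart, the real Schur factorization. Assuming scipy is available, the sketch below computes it for a random real matrix and checks the conjugate pairing of the nonreal eigenvalues directly.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
T, Q = schur(A, output='real')      # A = Q T Q^T, Q real orthogonal, T quasi-triangular
assert np.allclose(Q @ T @ Q.T, A) and np.allclose(Q.T @ Q, np.eye(6))

lam = np.linalg.eigvals(A)
nonreal = lam[np.abs(lam.imag) > 1e-10]
for mu in nonreal:                  # each nonreal eigenvalue pairs with its conjugate
    assert np.any(np.isclose(nonreal, mu.conjugate()))
```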

Problem 6

Let A, B ∈ Mn be given, and suppose A and B are simultaneously similar to upper triangular matrices; that is, S⁻¹AS and S⁻¹BS are both upper triangular for some nonsingular S ∈ Mn. Show that every eigenvalue of AB − BA must be zero.

Solution. Put TA = S⁻¹AS and TB = S⁻¹BS. Since TA and TB are upper triangular, TATB and TBTA are upper triangular; moreover, TATB = S⁻¹AS S⁻¹BS = S⁻¹ABS and similarly TBTA = S⁻¹BAS. Now, TATB − TBTA = S⁻¹ABS − S⁻¹BAS = S⁻¹(AB − BA)S, hence as TATB and TBTA are both upper triangular, it follows further that TATB − TBTA is also upper triangular, and the eigenvalues of AB − BA are the diagonal entries of TATB − TBTA. But if TA = [tij] and TB = [sij], then tij = sij = 0 for i > j, hence

\[
T_A T_B = \begin{bmatrix} t_{11} & & \ast \\ & \ddots & \\ 0 & & t_{nn} \end{bmatrix} \begin{bmatrix} s_{11} & & \ast \\ & \ddots & \\ 0 & & s_{nn} \end{bmatrix} = \begin{bmatrix} t_{11} s_{11} & & \ast \\ & \ddots & \\ 0 & & t_{nn} s_{nn} \end{bmatrix},
\]

so the diagonal of TATB is tiisii, i = 1, …, n, and by a similar computation the diagonal of TBTA is siitii (i.e., the two sets of diagonal entries are the same). Therefore, the diagonal of TATB − TBTA is tiisii − siitii = 0 for all i = 1, …, n, which implies that every eigenvalue of AB − BA is zero, as desired.
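
Remark. A numerical sketch (numpy; n = 5 and the seed are arbitrary) that manufactures a simultaneously triangularizable pair and confirms the computation above:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
TA = np.triu(rng.standard_normal((n, n)))
TB = np.triu(rng.standard_normal((n, n)))
S = rng.standard_normal((n, n))              # almost surely nonsingular
A = S @ TA @ np.linalg.inv(S)                # so S^{-1} A S = TA
B = S @ TB @ np.linalg.inv(S)                # and S^{-1} B S = TB
Tc = np.linalg.solve(S, (A @ B - B @ A) @ S) # = TA TB - TB TA, upper triangular
assert np.allclose(Tc, TA @ TB - TB @ TA)
assert np.allclose(np.diag(Tc), 0, atol=1e-8)    # zero diagonal: σ(AB - BA) = {0}
```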

Section 2.4: Some Implications of Schur’s Theorem

Problem 2

If A ∈ Mn, show that the rank of A is not less than the number of nonzero eigenvalues of A.

Solution. If A ∈ Mn and σ(A) = {λ1, …, λn}, then by Schur's Theorem there is a unitary matrix U such that U*AU = T = [tij], where T is upper triangular and tii = λi, i = 1, …, n. If k of the eigenvalues of A are nonzero, then T has k nonzero and n − k zero entries along its main diagonal. The k columns of T containing the nonzero diagonal entries constitute a linearly independent set: since T is upper triangular, the last such column has a nonzero entry in a row where all of the earlier ones vanish, and the claim follows by working from right to left. As such, rank(T) ≥ k. But then rank(A) ≥ k, since U is nonsingular and rank is invariant under multiplication by nonsingular matrices. Of course, we may certainly have rank(A) > k, for if

\[
A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},
\]

then A is already upper triangular and σ(A) = {0}, so even though A has no nonzero eigenvalues, rank(A) = 1 > 0.
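
Remark. A quick numerical illustration (numpy; the size and seed are arbitrary) of both the inequality and the possibility of strictness:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 6))
k = int(np.sum(~np.isclose(np.linalg.eigvals(A), 0)))   # number of nonzero eigenvalues
assert np.linalg.matrix_rank(A) >= k

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])          # the example above: σ(N) = {0}, yet rank(N) = 1
assert np.allclose(np.linalg.eigvals(N), 0) and np.linalg.matrix_rank(N) == 1
```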

Problem 4

Let A ∈ Mn be a nonsingular matrix. Show that any matrix that commutes with A also commutes with A⁻¹.

Solution. Here, we provide two proofs of the given statement. First, if A ∈ Mn is nonsingular and AB = BA for some B ∈ Mn, then B = A⁻¹BA, hence BA⁻¹ = A⁻¹B; so B commutes with A⁻¹ whenever it commutes with A. Second, by Corollary 2.4.4, since A ∈ Mn is nonsingular, there is a polynomial q(t), whose coefficients depend on A and where deg(q(t)) ≤ n − 1, such that A⁻¹ = q(A). Put k = deg(q(t)) and write q(t) = aₖtᵏ + aₖ₋₁tᵏ⁻¹ + ··· + a₁t + a₀, where aₖ ≠ 0. Now, observe that showing that BA = AB implies Bq(A) = q(A)B will prove the given statement. Note that for any p ∈ N we have

\[
BA^p = BAA^{p-1} = ABA^{p-1} = \cdots = A^i B A^{p-i} = \cdots = A^{p-1}BA = A^p B,
\]

so B commutes with any positive integer power of A; as such, we compute

\[
\begin{aligned}
q(A)B &= (a_k A^k + a_{k-1} A^{k-1} + \cdots + a_1 A + a_0 I)B \\
&= a_k A^k B + a_{k-1} A^{k-1} B + \cdots + a_1 AB + a_0 IB \\
&= a_k B A^k + a_{k-1} B A^{k-1} + \cdots + a_1 BA + a_0 BI \\
&= B(a_k A^k + a_{k-1} A^{k-1} + \cdots + a_1 A + a_0 I) = Bq(A),
\end{aligned}
\]

and thus A⁻¹B = q(A)B = Bq(A) = BA⁻¹, as desired.
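
Remark. A numerical sketch (numpy; the diagonal shift that makes A safely nonsingular, the particular polynomial defining B, and the seed are all arbitrary): any B commuting with a nonsingular A also commutes with A⁻¹.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted to be safely nonsingular
B = 2.0 * np.eye(n) - 3.0 * A + 0.5 * (A @ A)     # a polynomial in A, so AB = BA
assert np.allclose(A @ B, B @ A)
Ainv = np.linalg.inv(A)
assert np.allclose(Ainv @ B, B @ Ainv)            # B commutes with A^{-1} as well
```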

Problem 5

Use (2.3.1) to show that if A ∈ Mn has eigenvalues λ1, λ2, …, λn, then

\[
\sum_{i=1}^{n} \lambda_i^k = \operatorname{tr}(A^k), \qquad k = 1, 2, \ldots
\]

Solution. First, if A ∈ Mn and σ(A) = {λ1, …, λn}, then letting p(t) = tᵏ for k = 1, 2, …, by Theorem 1.1.6 we have that p(A) = Aᵏ has eigenvalues p(λi) = λiᵏ, i = 1, …, n. Now, by Schur's Theorem, for each k = 1, 2, … there is a unitary matrix Uk ∈ Mn such that Uk*AᵏUk = Tk = [tij⁽ᵏ⁾], where Tk is upper triangular and tii⁽ᵏ⁾ = λiᵏ, i = 1, …, n. Hence, by Problem 11 (below), as tr(AB) = tr(BA), and as the trace of a matrix is the sum of its diagonal entries, we have

\[
\operatorname{tr}(A^k) = \operatorname{tr}(U_k U_k^* A^k) = \operatorname{tr}(U_k^* A^k U_k) = \operatorname{tr}(T_k) = \sum_{i=1}^{n} \lambda_i^k, \qquad k = 1, 2, \ldots,
\]

as desired.
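
Remark. A numerical check of the power-sum identity (numpy; the size, seed, and range of k are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 5))
lam = np.linalg.eigvals(A)
Ak = np.eye(5)
for k in range(1, 6):
    Ak = Ak @ A                                   # Ak now holds A^k
    assert np.isclose(np.trace(Ak), np.sum(lam ** k))
```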

Problem 9

Let A ∈ Mn, B ∈ Mm be given and suppose A and B have no eigenvalues in common; that is, σ(A) ∩ σ(B) is empty. Use the Cayley-Hamilton theorem (2.4.2) to show that the equation AX − XB = 0, X ∈ Mn,m, has only the solution X = 0. Deduce from this fact that the equation AX − XB = C has a unique solution X ∈ Mn,m for each given C ∈ Mn,m.

Solution. Suppose AX = XB for A, B, and X as given above. Then, assuming that AᵏX = XBᵏ for k = 1, …, p, we have

\[
A^{p+1}X = A(A^p X) = A(XB^p) = (AX)B^p = (XB)B^p = XB^{p+1},
\]

thus by induction AᵏX = XBᵏ for all k = 1, 2, …. In this way, if p(t) is any polynomial, it follows that p(A)X = Xp(B) (as in Problem 4 above). So pA(A)X = XpA(B), hence as pA(A) = 0 by the Cayley-Hamilton Theorem, we have XpA(B) = 0. But since pA(t) = (t − λ1)(t − λ2)···(t − λn), where λi ∈ σ(A), i = 1, …, n, it follows that

\[
p_A(B) = \prod_{i=1}^{n} (B - \lambda_i I).
\]

Moreover, the eigenvalues of the matrix pA(B) are pA(μj) for μj ∈ σ(B), j = 1, …, m, hence as σ(A) ∩ σ(B) = ∅, μj ≠ λi for any 1 ≤ i ≤ n and 1 ≤ j ≤ m, so

\[
p_A(\mu_j) = \prod_{i=1}^{n} (\mu_j - \lambda_i) \neq 0
\]

for each j = 1, …, m. So, as all of the eigenvalues of pA(B) are nonzero, it follows that pA(B) is nonsingular, and as such XpA(B) = 0 has the unique solution X = 0; hence AX − XB = 0 has only the solution X = 0. Finally, consider the linear transformation T : Mn,m → Mn,m defined by T(X) = AX − XB. As T(X) = 0 has the unique solution X = 0, T is injective, and since Mn,m is finite-dimensional, T is therefore invertible; it follows that T(X) = C has a unique solution X for each C ∈ Mn,m, and the proof is complete.
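
Remark. The unique solvability just proved is the standard Sylvester equation result. Assuming scipy is available (its solve_sylvester routine solves AX + XB = Q, so we pass −B), a numerical sketch with deliberately disjoint spectra:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(8)
n, m = 4, 3
A = np.diag([1.0, 2.0, 3.0, 4.0]) + np.triu(rng.standard_normal((n, n)), 1)
B = np.diag([-1.0, -2.0, -3.0]) + np.triu(rng.standard_normal((m, m)), 1)
# σ(A) = {1,2,3,4} and σ(B) = {-1,-2,-3} are disjoint, so a unique X exists
C = rng.standard_normal((n, m))
X = solve_sylvester(A, -B, C)                # solves AX + X(-B) = C, i.e. AX - XB = C
assert np.allclose(A @ X - X @ B, C)
```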

Problem 11

Let A, B ∈ Mn be given and consider the commutator C = AB − BA. Show that tr(C) = 0. Consider

\[
A = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} \quad \text{and} \quad B = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix},
\]

and show that a commutator need not be nilpotent; that is, some eigenvalues of a commutator can be nonzero, even though the sum of the eigenvalues must be zero.

Solution. First, by the definition of trace as the sum of diagonal entries, we have

\[
\operatorname{tr}(C) = \operatorname{tr}(AB - BA) = \operatorname{tr}(AB) - \operatorname{tr}(BA),
\]

hence by Theorem 1.3.20, as the eigenvalues of AB and BA are the same (counting multiplicity), and as the trace of a matrix is also the sum of its eigenvalues, we have tr(AB) = tr(BA), so that tr(C) = 0. Now, observe that with A and B as given above, we have

\[
C = AB - BA = \begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix},
\]

so C has (nonzero) eigenvalues −1 and 1, hence C is not nilpotent, but tr(C) = −1 + 1 = 0.
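
Remark. A numerical sketch (numpy; the 5 × 5 random pair is arbitrary) of both halves of the problem:

```python
import numpy as np

rng = np.random.default_rng(9)
A, B = rng.standard_normal((5, 5)), rng.standard_normal((5, 5))
assert np.isclose(np.trace(A @ B - B @ A), 0)     # tr(C) = 0 for any commutator

A2 = np.array([[0.0, 0.0],
               [1.0, 0.0]])
B2 = np.array([[0.0, 1.0],
               [0.0, 0.0]])
C2 = A2 @ B2 - B2 @ A2                            # = diag(-1, 1), not nilpotent
assert np.allclose(np.sort(np.linalg.eigvals(C2).real), [-1.0, 1.0])
assert np.isclose(np.trace(C2), 0)
```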
