Determinants of Commuting-Block Matrices

by Istvan Kovacs, Daniel S. Silver*, and Susan G. Williams*

* The second and third authors are partially supported by National Science Foundation Grant DMS-9704399.

Let $R$ be a commutative ring, and $\mathrm{Mat}_n(R)$ the ring of $n \times n$ matrices over $R$. We can regard a $k \times k$ matrix $M = (A^{(i,j)})$ over $\mathrm{Mat}_n(R)$ as a block matrix, a matrix that has been partitioned into $k^2$ submatrices (blocks) over $R$, each of size $n \times n$. When $M$ is regarded in this way, we denote its determinant by $|M|$. We will use the symbol $D(M)$ for the determinant of $M$ viewed as a $k \times k$ matrix over $\mathrm{Mat}_n(R)$. It is important to realize that $D(M)$ is an $n \times n$ matrix.

Theorem 1. Let $R$ be a commutative ring. Assume that $M$ is a $k \times k$ block matrix of blocks $A^{(i,j)} \in \mathrm{Mat}_n(R)$ that commute pairwise. Then
\[
(1)\qquad |M| = |D(M)| = \Bigl|\sum_{\pi \in S_k} (\operatorname{sgn}\pi)\, A^{(1,\pi(1))} A^{(2,\pi(2))} \cdots A^{(k,\pi(k))}\Bigr|.
\]
Here $S_k$ is the symmetric group on $k$ symbols, and the summation is the usual one that appears in the definition of the determinant.

Theorem 1 is well known in the case $k = 2$; the proof is often left as an exercise in linear algebra texts (see [4, page 164], for example). The general result is implicit in [3], but it is not widely known. We present a short, elementary proof using mathematical induction on $k$. We sketch a second proof for the case in which the ring $R$ has no zero divisors, a proof that is based on [3] and avoids induction by using the fact that commuting matrices over an algebraically closed field can be simultaneously triangularized.

Proof. We use induction on $k$. The case $k = 1$ is evident. We suppose that (1) is true for $k - 1$ and then prove it for $k$. Observe that the following matrix equation holds:
\[
\begin{pmatrix}
I & O & \cdots & O \\
-A^{(2,1)} & I & \cdots & O \\
\vdots & \vdots & \ddots & \vdots \\
-A^{(k,1)} & O & \cdots & I
\end{pmatrix}
\begin{pmatrix}
I & O & \cdots & O \\
O & A^{(1,1)} & \cdots & O \\
\vdots & \vdots & \ddots & \vdots \\
O & O & \cdots & A^{(1,1)}
\end{pmatrix}
M =
\begin{pmatrix}
A^{(1,1)} & * \\
O & N
\end{pmatrix},
\]
where $N$ is a $(k-1) \times (k-1)$ block matrix. For the sake of notation, we write this as
\[
(2)\qquad PQM = R,
\]
where the symbols are defined appropriately.
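Before continuing with the proof, equation (1) can be sanity-checked numerically. The sketch below (an illustration, not part of the original paper) builds pairwise-commuting blocks as polynomials in a single random matrix $N$, assembles the block matrix $M$, computes $D(M)$ from its defining sum over $S_k$, and compares $|M|$ with $|D(M)|$. The sizes $k$, $n$, the random seed, and the coefficient ranges are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 2

# Blocks A[i][j] = a*I + b*N are polynomials in one fixed matrix N,
# so they commute pairwise, as Theorem 1 requires.
N = rng.integers(-3, 4, size=(n, n)).astype(float)
A = [[rng.integers(-2, 3) * np.eye(n) + rng.integers(-2, 3) * N
      for _ in range(k)] for _ in range(k)]

# Assemble the kn x kn block matrix M.
M = np.block(A)

def sign(perm):
    # Sign of a permutation via its inversion count.
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

# D(M) = sum over pi in S_k of sgn(pi) * A[0][pi(0)] @ ... @ A[k-1][pi(k-1)].
D = np.zeros((n, n))
for perm in itertools.permutations(range(k)):
    term = np.eye(n)
    for i in range(k):
        term = term @ A[i][perm[i]]
    D += sign(perm) * term

print(np.linalg.det(M), np.linalg.det(D))  # the two determinants agree
```

Theorem 1 predicts the two printed values coincide (up to floating-point rounding), and they do.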
By the multiplicative property of determinants we have $D(PQM) = D(P)D(Q)D(M) = (A^{(1,1)})^{k-1} D(M)$ and $D(R) = A^{(1,1)} D(N)$. Hence $(A^{(1,1)})^{k-1} D(M) = A^{(1,1)} D(N)$. Take the determinant of both sides of the last equation. Using $|D(N)| = |N|$, a consequence of the induction hypothesis, together with (2), we find
\[
|A^{(1,1)}|^{k-1} |D(M)| = |A^{(1,1)}|\,|D(N)| = |A^{(1,1)}|\,|N| = |R| = |P|\,|Q|\,|M| = |A^{(1,1)}|^{k-1} |M|.
\]
If $|A^{(1,1)}|$ is neither zero nor a zero divisor, then we can divide both sides by $|A^{(1,1)}|^{k-1}$ to get (1). For the general case, we embed $R$ in the polynomial ring $R[z]$, where $z$ is an indeterminate, and replace $A^{(1,1)}$ by the matrix $zI + A^{(1,1)}$. Since the determinant of $zI + A^{(1,1)}$ is a monic polynomial of degree $n$, and hence is nonzero and not a zero divisor, equation (1) again holds. Substituting $z = 0$ (equivalently, equating constant terms of both sides) yields the desired result.

We sketch an alternative proof of Theorem 1 for the case in which $R$ has no zero divisors, a proof suggested to us by the referee. It is based on ideas of [3] (see also [1]). If $R$ is a commutative ring with no zero divisors, then we can embed it in its quotient field and then pass to the algebraic closure $F$. We now regard the blocks $A^{(i,j)}$ as operators on the vector space $F^n$, and $M$ as an operator on the tensor product $V = F^n \otimes F^k$. Since the blocks $A^{(i,j)}$ commute pairwise, there exists a basis for $F^n$ with respect to which each $A^{(i,j)}$ is upper triangular (see [2], for example). We form the tensor product of such a basis with the standard one for $F^k$, thereby constructing a new basis for $V$. The change of basis has the effect on $M$ of simultaneously triangularizing each block. Thus it suffices to assume that each block $A^{(i,j)}$ is upper triangular.

The matrix $M$ is permutation-similar to an $n \times n$ block matrix $\tilde M = (\tilde A_{p,q})$ such that $\tilde A_{p,q} = (A^{(i,j)}_{p,q})$ is the $k \times k$ matrix consisting of the $(p,q)$-entries of the $A^{(i,j)}$. Since each $A^{(i,j)}$ is upper triangular, $\tilde A_{p,q} = O$ whenever $p > q$.
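The permutation-similarity just described is the "perfect shuffle" that regroups the $kn \times kn$ matrix by entry position rather than by block. The sketch below (an illustration, not from the paper) builds simultaneously upper-triangular commuting blocks as polynomials in one upper-triangular matrix $N$, applies the shuffle, and checks that $|M|$ equals the product of the determinants of the $n$ diagonal $k \times k$ blocks of $\tilde M$. All concrete sizes and data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 2, 3

# Upper-triangular blocks that are polynomials in one upper-triangular N,
# so they commute pairwise and are already simultaneously triangular.
N = np.triu(rng.integers(-2, 3, size=(n, n))).astype(float)
A = [[rng.integers(-2, 3) * np.eye(n) + rng.integers(-2, 3) * N
      for _ in range(k)] for _ in range(k)]
M = np.block(A)

# Perfect shuffle: row i*n + p of M becomes row p*k + i of M~, so the
# (p,q) block of M~ collects the (p,q)-entries of all the A^{(i,j)}.
perm = [i * n + p for p in range(n) for i in range(k)]
Mt = M[np.ix_(perm, perm)]

# M~ is block upper triangular with n diagonal k x k blocks, hence
# |M| = |M~| = product of the determinants of those diagonal blocks.
diag_dets = [np.linalg.det(Mt[r * k:(r + 1) * k, r * k:(r + 1) * k])
             for r in range(n)]
print(np.linalg.det(M), np.prod(diag_dets))  # equal up to rounding
```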
Hence
\[
|M| = |\tilde M| = |\tilde A_{1,1}| \cdots |\tilde A_{n,n}|
= \prod_{r=1}^{n} \sum_{\pi \in S_k} (\operatorname{sgn}\pi)\, A^{(1,\pi(1))}_{r,r} \cdots A^{(k,\pi(k))}_{r,r}.
\]
Since each $A^{(i,j)}$ is upper triangular (so that the diagonal entries of a product of the blocks are the products of their diagonal entries), the last product is equal to
\[
\prod_{r=1}^{n} \sum_{\pi \in S_k} (\operatorname{sgn}\pi)\, \bigl(A^{(1,\pi(1))} \cdots A^{(k,\pi(k))}\bigr)_{r,r}.
\]
But this is equal to
\[
\Bigl|\sum_{\pi \in S_k} (\operatorname{sgn}\pi)\, A^{(1,\pi(1))} \cdots A^{(k,\pi(k))}\Bigr|.
\]
Hence equation (1) holds.

The second proof shows that the commutativity hypothesis of Theorem 1 can be replaced by the weaker condition that the blocks can be simultaneously triangularized. However, some hypothesis about the blocks is certainly needed for the conclusion of the theorem to hold, as the reader can see by considering the matrix $M$ below, regarded as a $2 \times 2$ block matrix of $2 \times 2$ blocks:
\[
M = \begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0
\end{pmatrix}.
\]

We conclude by describing a class of block matrices that satisfies the commutativity hypothesis of Theorem 1. Matrices of this type arose in [5], and were the original motivation for this investigation. Let $p^{(i,j)}(t)$ be polynomials, $1 \le i, j \le k$, and let $N$ be an $n \times n$ matrix. All coefficients are in $R$, which can be taken to be the field of complex numbers, if the reader desires. Since the matrices $p^{(i,j)}(N)$ commute pairwise, the block matrix
\[
M = \begin{pmatrix}
p^{(1,1)}(N) & \cdots & p^{(1,k)}(N) \\
\vdots & \ddots & \vdots \\
p^{(k,1)}(N) & \cdots & p^{(k,k)}(N)
\end{pmatrix}
\]
satisfies the hypothesis of Theorem 1. In fact, using the theorem we can say something about the determinant of $M$. Let $p(t)$ be the determinant of
\[
\begin{pmatrix}
p^{(1,1)}(t) & \cdots & p^{(1,k)}(t) \\
\vdots & \ddots & \vdots \\
p^{(k,1)}(t) & \cdots & p^{(k,k)}(t)
\end{pmatrix},
\]
and let $\zeta_1, \ldots, \zeta_n$ be the (not necessarily distinct) eigenvalues of $N$. We leave the proof of the following assertion as an exercise:
\[
|M| = \prod_{r=1}^{n} p(\zeta_r).
\]

Bibliography

[1] R. Bhatia, R. A. Horn and F. Kittaneh, Normal approximants to binormal operators, Linear Alg. Appl. 147 (1991), 169-179.
[2] P. R. Halmos, Finite-Dimensional Vector Spaces, Springer-Verlag, New York, 1993.
[3] R. A. Horn, Solution to Problem 96-11, SIAM Review 39 (1997), 518-519.
[4] K. Hoffman and R. Kunze, Linear Algebra, Prentice-Hall, New Jersey, 1971.
[5] D. S. Silver and S. G. Williams, Coloring link diagrams with a continuous palette, Topology, to appear.
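As a numerical sanity check of the closing assertion $|M| = \prod_r p(\zeta_r)$ (an illustration added here, not part of the paper), one can take random linear polynomials $p^{(i,j)}(t) = a_{ij} + b_{ij} t$ and a random integer matrix $N$; the sizes, seed, and coefficient ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 2, 3

N = rng.integers(-2, 3, size=(n, n)).astype(float)

# Random linear polynomials p^{(i,j)}(t) = a[i,j] + b[i,j] * t.
a = rng.integers(-2, 3, size=(k, k)).astype(float)
b = rng.integers(-2, 3, size=(k, k)).astype(float)

# Block matrix M with blocks p^{(i,j)}(N); blocks commute pairwise.
M = np.block([[a[i, j] * np.eye(n) + b[i, j] * N for j in range(k)]
              for i in range(k)])

# p(t) = det of the k x k matrix of scalars p^{(i,j)}(t), evaluated at
# each (possibly complex) eigenvalue zeta_r of N.
eigs = np.linalg.eigvals(N)
p_vals = [np.linalg.det(a + b * z) for z in eigs]

print(np.linalg.det(M), np.prod(p_vals))  # agree up to rounding
```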
Department of Mathematics and Statistics
University of South Alabama
Mobile, AL 36688
e-mail: [email protected], [email protected]