
10.1098/rspa.2002.1040

Use of geometric algebra: compound matrices and the determinant of the sum of two matrices

By Uwe Prells^1, Michael I. Friswell^1 and Seamus D. Garvey^2

^1 Dynamics Research, School of Engineering, University of Wales Swansea, Swansea SA2 8PP, UK
^2 School of Mechanical, Materials, Manufacturing Engineering and Management, The University of Nottingham, University Park, Nottingham NG7 2RD, UK

Received 14 February 2002; revised 7 June 2002; accepted 24 June 2002; published online 14 November 2002

In this paper we demonstrate the capabilities of geometric algebra by the derivation of a formula for the determinant of the sum of two matrices in which both matrices are separated, in the sense that the resulting expression consists of a sum of traces of products of their compound matrices. For the derivation we introduce a vector of Grassmann elements associated with an arbitrary square matrix, we recall the concept of compound matrices and we summarize some of their properties. This paper introduces a new derivation and interpretation of the relationship between $p$-forms and the $p$th compound matrix, and it demonstrates the use of geometric algebra, which has the potential to be applied to a wide range of problems.

Keywords: geometric algebra; determinants; compound matrices

1. Introduction

Many interesting questions are related to the determinant of matrices (Krattenthaler & Zeilberger 1970), and in particular to the determinant of the sum of two matrices (Li & Mathias 1995; Bebiano et al. 1994). Indeed, some of these still remain unanswered (Barret & Jarvis 1992; Bebiano 1994). A natural generalization of determinants are compound matrices, which are closely related to the presentation of geometric algebra. As with geometric algebra, the concept of compound matrices has been neglected for many years, and its potential in theoretical analysis and practical application has been widely underestimated (see, for example, Nambiar 1997; Nambiar & Keating 1970; Malik 1970; Linsay & Rooney 1992; Fuchs 1992; Mitrouli & Koukouvinos 1997). For example, Nambiar & Keating (1970) conclude their paper by saying

while a few network theorists certainly seem to be aware of the potentialities here, largely compound matrices seem to be a neglected subject. Even an outstanding book like Gantmacher [(1959)] makes only a casual mention of it.

Proc. R. Soc. Lond. A (2003) 459, 273–285. © 2002 The Royal Society

Since the early books of Cullis (1913, 1918, 1925) and the often-quoted book of Aitken (1954), no recent book has reflected the modern application capabilities of the concept of compound matrices in depth. We hope that this paper encourages mathematicians, physicists and engineers to revive the concept of compound matrices within the framework of geometric algebra and to use it practically. However, the main scope of this paper is to present a derivation of a formula for the determinant of the sum of two matrices. This formula is equivalent to the application of the Laplace expansion theorem to the determinant of the sum of two matrices (Marcus 1975), although it is not a simple task to use this relation. The form of the expression presented in this paper appears to be more accessible than previous versions, and the derivation itself motivates further study of the relationship between geometric algebra and compound matrices. Of course, since the determinant of an $n \times n$ matrix $A$ is identical to its $n$th compound matrix $\mathcal{C}_n(A)$, the Binet–Cauchy theorem provides one way of separation:

$$\det(A + B) = \det\!\left([A,\; I_n]\begin{bmatrix} I_n \\ B \end{bmatrix}\right) = \mathcal{C}_n\bigl([A,\; I_n]\bigr)\, \mathcal{C}_n\!\left(\begin{bmatrix} I_n \\ B \end{bmatrix}\right). \tag{1.1}$$

Obviously, the right-hand side is a scalar product of two $\binom{2n}{n}$-dimensional vectors. But is it possible to express this product in terms of the compound matrices of $A$ and $B$? The answer is affirmative. To derive a corresponding formula we need some definitions and notation. In the next section the basics of Grassmann algebra are recalled, and in §3 the relation to compound matrices is highlighted. The main result is given in §4. Section 5 contains some examples.
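Identity (1.1) is easy to probe numerically. The sketch below is our own illustration, not part of the paper: the helper `compound` (a hypothetical name) builds the $p$th compound matrix from all $p \times p$ minors with lexicographically ordered index sets, and the product of the two maximal-minor vectors is compared with $\det(A+B)$.

```python
from itertools import combinations
import numpy as np

def compound(A, p):
    """p-th compound matrix of A: all p x p minors, row/column index
    sets ordered lexicographically (a sketch built from (3.1))."""
    n, m = A.shape
    rows = list(combinations(range(n), p))
    cols = list(combinations(range(m), p))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

rng = np.random.default_rng(0)
n = 3
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# [A, I_n] is n x 2n and [I_n; B] is 2n x n; their product is A + B,
# so Binet-Cauchy splits det(A + B) into a scalar product of two
# binomial(2n, n)-dimensional vectors of maximal minors, as in (1.1).
left = compound(np.hstack([A, np.eye(n)]), n)    # 1 x C(2n, n)
right = compound(np.vstack([np.eye(n), B]), n)   # C(2n, n) x 1
val = (left @ right).item()
assert abs(val - np.linalg.det(A + B)) < 1e-10
```

The two factors have $\binom{6}{3} = 20$ entries each for $n = 3$, which illustrates why this separation, although exact, is not yet the accessible form sought in this paper.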

2. Grassmann algebra

We define the Grassmann algebra $\mathcal{G}_n(\mathbb{C}) := \bigoplus_{p=0}^{n} V_p(\mathbb{C})$, briefly written as $\mathcal{G}$, as the direct sum of the $n_p := \binom{n}{p}$-dimensional vector spaces $V_p$ over the field $\mathbb{C} \equiv V_0$. The algebra $\mathcal{G}$ is $2^n$-dimensional and is spanned by the basis elements

$$\{\zeta_1, \zeta_2, \ldots, \zeta_{n+1}, \zeta_{n+2}, \ldots, \zeta_{2^n}\} := \{1,\; e_1, \ldots, e_n,\; e_1 \wedge e_2, \ldots,\; e_1 \wedge \cdots \wedge e_n\}, \tag{2.1}$$

where $e_i$ is the $i$th column of the $n$-dimensional identity matrix $I_n$ and the Grassmann product (exterior or wedge product) is defined by

$$e_k \wedge e_i + e_i \wedge e_k = 0, \tag{2.2}$$

which is a special case of the general geometric or Clifford product

$$e_k \circ_B e_i + e_i \circ_B e_k = B(e_i, e_k), \tag{2.3}$$

where $B(e_i, e_k)$ is an arbitrary bilinear form (Porteous 1995). The above definition of the Grassmann product implies $e_i \wedge e_i = 0$. The basis element

$$e_{i_1 \cdots i_p} := e_{i_1} \wedge \cdots \wedge e_{i_p}, \qquad 1 \leq i_1 < \cdots < i_p \leq n, \tag{2.4}$$

is called the basic $p$-form, basic $p$-vector or $p$-blade. The subsequence of all $p$-forms in (2.1) is ordered lexicographically, which means that if we realize $i_1 \cdots i_p$ as the digits of a number with a sufficiently high basis, then the order is increasing from


$1 \cdots p$ to $(n-p+1) \cdots n$. Because the basis is ordered, (2.1) can be written uniquely as a vector

$$\zeta := \begin{pmatrix} \zeta_{(0)} \\ \zeta_{(1)} \\ \vdots \\ \zeta_{(n)} \end{pmatrix}, \tag{2.5}$$

where for $p \in \{0, \ldots, n\}$ the $n_p$-dimensional vector $\zeta_{(p)}$ contains all basis elements of $V_p$ and is associated with $p$-forms by

$$\zeta_{(p)} := \begin{pmatrix} \zeta_{(p)1} \\ \vdots \\ \zeta_{(p)n_p} \end{pmatrix} := \begin{pmatrix} e_{1 \cdots p} \\ \vdots \\ e_{i_1 \cdots i_p} \\ \vdots \\ e_{n-p+1 \cdots n} \end{pmatrix} \in \mathcal{G}^{n_p}. \tag{2.6}$$

In particular we have $\zeta_{(0)} \equiv \zeta_1 = 1$ and $\zeta_{(n)} \equiv \zeta_{2^n} = e_{1 \cdots n}$. The 0-forms are scalars, the 1-forms are vectors, the 2-forms are bi-vectors, and so on. A typical element $g \in \mathcal{G}$ is called a Grassmann element or Grassmann number and is a linear combination of scalars $g_i \in \mathbb{C}$ with all basis elements, which can alternatively be written as

$$g = \sum_{i=1}^{2^n} g_i \zeta_i = g^{\mathrm{T}} \zeta = \sum_{p=0}^{n} g_{(p)}^{\mathrm{T}} \zeta_{(p)}, \tag{2.7}$$

where $g_i$ is the $i$th element of the vector $g \in \mathbb{C}^{2^n}$ and where the vectors $g_{(p)} \in \mathbb{C}^{n_p}$ are associated with the $p$-form part of $g$, that is

$$[g]_p := g_{(p)}^{\mathrm{T}} \zeta_{(p)} \in V_p. \tag{2.8}$$
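The antisymmetry (2.2) and the basic $p$-forms (2.4) can be mimicked computationally by tracking an index tuple together with the sign of the permutation that sorts it. The following sketch is our own illustration (the helper name `wedge` is not from the paper):

```python
def wedge(I, J):
    """Wedge e_I ^ e_J of two basic forms, each given by a strictly
    increasing index tuple: returns (sign, sorted indices), or
    (0, ()) when an index repeats, since e_i ^ e_i = 0 by (2.2)."""
    K = list(I) + list(J)
    if len(set(K)) < len(K):
        return 0, ()
    sign = 1
    for a in range(len(K)):          # parity of the sorting permutation
        for b in range(a + 1, len(K)):
            if K[a] > K[b]:
                sign = -sign
    return sign, tuple(sorted(K))

assert wedge((2,), (1,)) == (-1, (1, 2))     # e2 ^ e1 = -e1 ^ e2
assert wedge((1,), (1,)) == (0, ())          # e1 ^ e1 = 0
assert wedge((1, 2), (3,)) == (1, (1, 2, 3))
```

A general Grassmann number as in (2.7) is then a dictionary mapping such index tuples to scalar coefficients, with `wedge` extended bilinearly.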

Since $n_p = n_{n-p}$, the vectors $\zeta_{(p)}$ and $\zeta_{(n-p)}$ have the same dimension. We extend the definition of the exterior product to the product $\zeta_{(p)} \wedge \zeta_{(n-p)}^{\mathrm{T}}$ of two vectors of Grassmann numbers, analogous to the dyadic product of two vectors, as the componentwise exterior product, that is

$$\zeta_{(p)} \wedge \zeta_{(n-p)}^{\mathrm{T}} = \begin{pmatrix} e_{1\cdots p} \\ \vdots \\ e_{i_1\cdots i_p} \\ \vdots \\ e_{n-p+1\cdots n} \end{pmatrix} \wedge \bigl( e_{1\cdots n-p}, \;\ldots,\; e_{i_{p+1}\cdots i_n}, \;\ldots,\; e_{p+1\cdots n} \bigr)$$

$$:= \begin{bmatrix} e_{1\cdots p}\wedge e_{1\cdots n-p} & \cdots & e_{1\cdots p}\wedge e_{p+1\cdots n} \\ \vdots & & \vdots \\ e_{n-p+1\cdots n}\wedge e_{1\cdots n-p} & \cdots & e_{n-p+1\cdots n}\wedge e_{p+1\cdots n} \end{bmatrix} = (-1)^{p(p+1)/2}\, \operatorname{antidiag}\bigl((-1)^{1+\cdots+p}, \ldots, (-1)^{(n-p+1)+\cdots+n}\bigr)\, e_{1\cdots n} = (-1)^{p(p+1)/2}\, \Sigma_p E_p\, \zeta_{(n)}, \tag{2.9}$$

where the $n_p \times n_p$ matrix $E_p$ is the rotated identity matrix, that is

$$E_p := \operatorname{antidiag}(1, \ldots, 1), \tag{2.10}$$

and $\Sigma_p := \operatorname{diag}[(-1)^{\pi_{pk}}]_{k=1,\ldots,n_p}$ with $\pi_{pk} := i_1 + \cdots + i_p$. Without proof we note that

$$\Sigma_{n-p} = (-1)^{n(n+1)/2}\, E_p \Sigma_p E_p. \tag{2.11}$$

For each set $S_{pk} = \{i_1, \ldots, i_p\}$ of indices $1 \leq i_1 < \cdots < i_p \leq n$ there exists one unique set $S_{n-p,j} = \{i_{p+1}, \ldots, i_n\}$ of indices $1 \leq i_{p+1} < \cdots < i_n \leq n$ such that $S_{pk} \cup S_{n-p,j} = \underline{n}$ and $S_{pk} \cap S_{n-p,j} = \emptyset$. Although each set is ordered, the concatenated set $\{i_1, \ldots, i_p, i_{p+1}, \ldots, i_n\}$ is in general not ordered. Now consider the $n$-form $e_{i_1\cdots i_p} \wedge e_{i_{p+1}\cdots i_n}$. Since $i_1, \ldots, i_n$ is a permutation of $1, \ldots, n$, this $n$-form differs only by a sign from $\zeta_{(n)} = e_{1\cdots n}$. The sign is negative if the permutation has an odd number of transpositions, and it is positive if the permutation has an even number of transpositions. The result

$$e_{i_1\cdots i_p} \wedge e_{i_{p+1}\cdots i_n} = (-1)^{\pi_{pk} - p(p+1)/2}\, e_{1\cdots n}, \tag{2.12}$$

which motivates the definition of the star operator (Hodge star operator)

$$* e_{i_1\cdots i_p} := (-1)^{\pi_{pk} - p(p+1)/2}\, e_{i_{p+1}\cdots i_n}, \tag{2.13}$$

which is more convenient. Since $\pi_{pk} + \pi_{n-p,j} = n(n+1)/2$, a short calculation reveals

$$**\, e_{i_1\cdots i_p} = (-1)^{p(n-p)}\, e_{i_1\cdots i_p}. \tag{2.14}$$
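Definition (2.13) and the double-star identity (2.14) are straightforward to check computationally. The sketch below is our own (the helper `star` is a hypothetical name, not the paper's notation):

```python
def star(I, n):
    """Hodge star of the basic form e_I in G_n, following (2.13):
    *e_I = (-1)^(pi - p(p+1)/2) e_J, J the ordered complement of I."""
    p = len(I)
    pi = sum(I)                                   # pi_pk = i_1 + ... + i_p
    J = tuple(i for i in range(1, n + 1) if i not in I)
    return (-1) ** (pi - p * (p + 1) // 2), J

# check (2.14): applying the star twice returns the form up to the
# sign (-1)^(p(n-p))
for n, I in [(5, (1,)), (5, (2, 4)), (5, (1, 3, 5)), (4, (2,))]:
    p = len(I)
    s1, J = star(I, n)
    s2, K = star(J, n)
    assert K == I and s1 * s2 == (-1) ** (p * (n - p))
```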

Declaring the action of the star operator on $\zeta_{(p)}$ componentwise,

$$*\zeta_{(p)} := \begin{pmatrix} *\zeta_{(p)1} \\ \vdots \\ *\zeta_{(p)n_p} \end{pmatrix} = (-1)^{p(p+1)/2}\, \Sigma_p E_p\, \zeta_{(n-p)}, \tag{2.15}$$

we find

$$\zeta_{(p)} \wedge *\zeta_{(p)}^{\mathrm{T}} = (-1)^{p(p+1)/2}\, \zeta_{(p)} \wedge \zeta_{(n-p)}^{\mathrm{T}}\, E_p \Sigma_p = I_{n_p}\, \zeta_{(n)}. \tag{2.16}$$

As an analogy to the scalar product of two vectors we define the product

$$\zeta_{(p)}^{\mathrm{T}} \wedge *\zeta_{(p)} := (-1)^{p(p+1)/2}\, \zeta_{(p)}^{\mathrm{T}}\, \Sigma_p E_p \wedge \zeta_{(n-p)} = (-1)^{p(p+1)/2} \operatorname{tr}\bigl( E_p \Sigma_p\, \zeta_{(p)} \wedge \zeta_{(n-p)}^{\mathrm{T}} \bigr) = n_p\, \zeta_{(n)}. \tag{2.17}$$


We want to explain the action of the star operator on an arbitrary Grassmann number $g$. It is sufficient to look at the $p$-form part of $g$. Invoking the star operator we find

$$*[g]_p := g_{(p)}^{\mathrm{T}}\, {*\zeta_{(p)}} = (-1)^{p(p+1)/2}\, g_{(p)}^{\mathrm{T}}\, \Sigma_p E_p\, \zeta_{(n-p)} \in V_{n-p}. \tag{2.18}$$

For an arbitrary but fixed $p \in \{0, \ldots, n\}$ let $g, h \in V_p$. We can then define an inner product on $V_p$ by

$$\langle\cdot\,,\cdot\rangle : \; V_p \times V_p \to \mathbb{C}, \qquad (g, h) \mapsto \langle g, h\rangle := *(g \wedge *h). \tag{2.19}$$

Hence, we have

$$\langle g, h\rangle = *(g \wedge *h) = *\bigl[(g^{\mathrm{T}} \zeta_{(p)}) \wedge (h^{\mathrm{T}}\, {*\zeta_{(p)}})\bigr] = *\bigl(g^{\mathrm{T}}\, \zeta_{(p)} \wedge {*\zeta_{(p)}^{\mathrm{T}}}\, h\bigr) = *\bigl(g^{\mathrm{T}} I_{n_p} h\, \zeta_{(n)}\bigr) = g^{\mathrm{T}} h, \tag{2.20}$$

because $*\zeta_{(n)} = 1$. This concludes the recollection of the basics of Grassmann algebra. In the next section we will explore its relationship to compound matrices.

3. Compound matrices

Let $A \in \mathbb{C}^{n \times m}$, $m \leq n$, and for $p \leq m$ let $\{S_{pk}\}_{k=1,\ldots,n_p}$ denote the sequence of lexicographically ordered sets of combinations without repetition of $p$ out of $n$, and for $m_p := \binom{m}{p}$ let $\{T_{pi}\}_{i=1,\ldots,m_p}$ denote the sequence of lexicographically ordered sets of combinations without repetition of $p$ out of $m$. Moreover, define $A_{(S_{pk}, T_{pi})}$ as the $p \times p$ matrix resulting from selecting from $A$ all rows with indices in $S_{pk}$ and all columns with indices in $T_{pi}$. Finally, we define

$$c_{pki}(A) := \det\bigl(A_{(S_{pk}, T_{pi})}\bigr) \in \mathbb{C}, \tag{3.1}$$

and then the matrix

$$\mathcal{C}_p(A) \in \mathbb{C}^{n_p \times m_p}, \tag{3.2}$$

which has the element $c_{pki}(A)$ in row $k$ and column $i$, is called the $p$th compound matrix of $A$ (Aitken 1954, p. 90). This matrix is sometimes called the $p$th exterior power of $A$, denoted by $\wedge^p A$ (Marcus 1973, p. 117). Compound matrices have many useful properties. We will mention only some of them. (Details can be found in Aitken (1954).) For $A \in \mathbb{C}^{n \times m}$, $B \in \mathbb{C}^{m \times \ell}$, let $p \leq \min(n, m, \ell)$ and let $X \in \mathbb{C}^{n \times n}$. The following properties are then found to be true.

(i) (Binet–Cauchy theorem) $\mathcal{C}_p(AB) = \mathcal{C}_p(A)\, \mathcal{C}_p(B)$.

(ii) $\operatorname{rank}(A) = r < m$ if and only if $\mathcal{C}_p(A) = 0$ for all $p > r$ and $\mathcal{C}_p(A) \neq 0$ for $p \leq r$.

(iii) If $A$ is unitary, then $\mathcal{C}_p(A)$ is unitary.


(iv) $\mathcal{C}_p(I_n) = I_{n_p}$.

(v) $\mathcal{C}_p(A^{\mathrm{T}}) = \mathcal{C}_p(A)^{\mathrm{T}}$.

(vi) $\mathcal{C}_p(\lambda A) = \lambda^p\, \mathcal{C}_p(A)$.

(vii) If $X$ is Hermitian, then $\mathcal{C}_p(X)$ is Hermitian.

(viii) If $X$ is skew-Hermitian, then $\mathcal{C}_p(X)$ is skew-Hermitian if $p$ is odd and is Hermitian if $p$ is even.

(ix) If $X$ is diagonal, then $\mathcal{C}_p(X)$ is diagonal.

(x) If $X$ is upper (lower) triangular, then $\mathcal{C}_p(X)$ is upper (lower) triangular (Mitrouli & Koukouvinos 1997, pp. 97–98).

(xi) $\mathcal{C}_1(X) = X$.

(xii) $\mathcal{C}_n(X) = \det(X)$.

(xiii) (Sylvester–Franke theorem) $\det(\mathcal{C}_m(X)) = \det(X)^{\binom{n-1}{m-1}}$ (Marcus 1973, p. 130).

(xiv) If we define the $k$th adjugate compound $X^{\operatorname{ad} k}$ of $X$ by

$$X^{\operatorname{ad} k} := \Sigma_k E_k\, \mathcal{C}_{n-k}(X)^{\mathrm{T}}\, E_k \Sigma_k,$$

then $X^{\operatorname{ad} k}\, \mathcal{C}_k(X) = \det(X)\, I_{n_k}$ (Aitken 1954, p. 91).

(xv) If we define $S := \Sigma_1 = \operatorname{diag}(\pi_{1i})_{i=1,\ldots,n}$, where $\pi_{1i} := (-1)^i$, then for all $i \in \{2, \ldots, n\}$ we have $\mathcal{C}_i(S) = \Sigma_i$.

Remark 3.1. The term adjugate is used in place of adjoint to avoid confusion with the Hermitian adjoint of a complex matrix (Horn & Johnson 1985, p. 20). The $k$th adjugate compound is sometimes also called the $k$th supplementary compound (Marcus 1975, p. 136).
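Several of the listed properties can be verified numerically. The sketch below is ours, not the authors': the hypothetical helpers `compound` and `adj_compound` are built directly from definition (3.1) and from (xiv), and properties (i), (vi), (xiii) and the adjugate-compound identity are checked for random matrices.

```python
from itertools import combinations
from math import comb
import numpy as np

def compound(A, p):
    """p-th compound matrix per (3.1): all p x p minors of A."""
    n, m = A.shape
    rows = list(combinations(range(n), p))
    cols = list(combinations(range(m), p))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

def adj_compound(X, k):
    """k-th adjugate compound per (xiv): Sig_k E_k C_{n-k}(X)^T E_k Sig_k."""
    n = X.shape[0]
    sets = combinations(range(1, n + 1), k)      # 1-based lex index sets
    Sig = np.diag([(-1.0) ** sum(s) for s in sets])
    E = np.fliplr(np.eye(comb(n, k)))            # rotated identity (2.10)
    return Sig @ E @ compound(X, n - k).T @ E @ Sig

rng = np.random.default_rng(1)
n, p = 4, 2
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
# (i) Binet-Cauchy theorem
assert np.allclose(compound(A @ B, p), compound(A, p) @ compound(B, p))
# (vi) scaling
assert np.allclose(compound(2.5 * A, p), 2.5 ** p * compound(A, p))
# (xiii) Sylvester-Franke theorem
assert np.allclose(np.linalg.det(compound(A, p)),
                   np.linalg.det(A) ** comb(n - 1, p - 1))
# (xiv) X^{ad k} C_k(X) = det(X) I
assert np.allclose(adj_compound(A, p) @ compound(A, p),
                   np.linalg.det(A) * np.eye(comb(n, p)))
```

For $k = 1$ the adjugate compound reduces to the classical adjugate, so (xiv) contains Cramer's rule as a special case.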

We want to calculate the Grassmann product of the $m \leq n$ column vectors $a_i \in \mathbb{C}^n$ of $A = [a_1, \ldots, a_m]$:

$$a_{1\cdots m} := a_1 \wedge \cdots \wedge a_m = \sum_{k=1}^{n_m} \det \begin{pmatrix} a_{i_1 1} & \cdots & a_{i_1 m} \\ \vdots & & \vdots \\ a_{i_m 1} & \cdots & a_{i_m m} \end{pmatrix} e_{i_1\cdots i_m} = \sum_{k=1}^{n_m} \det\bigl(A_{(S_{mk}, \underline{m})}\bigr)\, \zeta_{(m)k} = \mathcal{C}_m(A)^{\mathrm{T}}\, \zeta_{(m)}, \tag{3.3}$$

where $\{i_1, \ldots, i_m\} = S_{mk}$.

Note that $\mathcal{C}_m(A) \in \mathbb{C}^{n_m}$ is a vector. We would like to generalize this relation between multiple Grassmann products and compound matrices. To this end we define the map


$\tau : \mathbb{C}^{n \times n} \to \mathcal{G}^{2^n}$ by

$$\tau(X) := \begin{pmatrix} 1 \\ x_1 \\ \vdots \\ x_n \\ x_{12} \\ \vdots \\ x_{n-1\,n} \\ x_{123} \\ \vdots \\ x_{1\cdots n} \end{pmatrix} := \begin{pmatrix} 1 \\ x_1 \\ \vdots \\ x_n \\ x_1 \wedge x_2 \\ \vdots \\ x_{n-1} \wedge x_n \\ x_1 \wedge x_2 \wedge x_3 \\ \vdots \\ x_1 \wedge \cdots \wedge x_n \end{pmatrix} =: \begin{pmatrix} \tau_0(X) \\ \tau_1(X) \\ \vdots \\ \tau_n(X) \end{pmatrix}, \tag{3.4}$$

where for every $p \in \{0, \ldots, n\}$ we denote the $p$-form part of $\tau(X)$ by $\tau_p(X)$, that is

$$\tau_p(X) = \begin{pmatrix} x_{1\cdots p} \\ \vdots \\ x_{n-p+1\cdots n} \end{pmatrix} = \mathcal{C}_p(X)^{\mathrm{T}}\, \zeta_{(p)}. \tag{3.5}$$

In particular we have $\tau_0(X) = 1$, $\tau_1(X) = X^{\mathrm{T}} \zeta_{(1)}$ and $\tau_n(X) = \det(X)\, \zeta_{(n)}$. As an analogy to the star operator we define

$$*\tau(X) := \begin{pmatrix} *\tau_0(X) \\ *\tau_1(X) \\ \vdots \\ *\tau_n(X) \end{pmatrix}, \tag{3.6}$$

where

$$*\tau_p(X) := (-1)^{p(p+1)/2}\, \Sigma_p E_p\, \tau_{n-p}(X), \qquad 1 \leq p < n, \tag{3.7}$$
$$*\tau_0(X) := \tau_n(X), \tag{3.8}$$
$$*\tau_n(X) := \tau_0(X). \tag{3.9}$$

Moreover, we define an inner product on $\mathcal{G}^{2^n}$ by

$$\{\cdot\,,\cdot\} : \; \mathcal{G}^{2^n} \times \mathcal{G}^{2^n} \to \mathbb{C}, \qquad (\tau(X), \tau(Y)) \mapsto \{\tau(X), \tau(Y)\} := *\sum_{p=0}^{n} \tau_p(X)^{\mathrm{T}} \wedge *\tau_p(Y). \tag{3.10}$$

We now wish to evaluate this inner product. Using (2.12), (3.4) and (3.7)–(3.9) we have

$$\begin{aligned}
\{\tau(X), \tau(Y)\} &= *y_{1\cdots n} + *x_{1\cdots n} + \sum_{p=1}^{n-1} *\bigl(\tau_p(X)^{\mathrm{T}} \wedge *\tau_p(Y)\bigr) \\
&= *y_{1\cdots n} + *x_{1\cdots n} + \sum_{p=1}^{n-1} (-1)^{p(p+1)/2}\, *\bigl(\tau_p(X)^{\mathrm{T}}\, \Sigma_p E_p \wedge \tau_{n-p}(Y)\bigr) \\
&= *y_{1\cdots n} + *x_{1\cdots n} + \sum_{p=1}^{n-1} (-1)^{p(p+1)/2}\, *\bigl( (-1)^{\pi_{p1}}\, x_{1\cdots p}\wedge y_{p+1\cdots n} + \cdots + (-1)^{\pi_{p n_p}}\, x_{n-p+1\cdots n}\wedge y_{1\cdots n-p} \bigr) \\
&= *y_{1\cdots n} + *x_{1\cdots n} + \sum_{p=1}^{n-1} *\bigl( x_{1\cdots p}\wedge y_{p+1\cdots n} + \cdots + y_{1\cdots n-p}\wedge x_{n-p+1\cdots n} \bigr) \\
&= *\bigl( x_{1\cdots n} + x_{1\cdots n-1}\wedge y_n + \cdots + x_1\wedge y_{2\cdots n} + \cdots + y_{1\cdots n} \bigr) \\
&= *\bigl[ (x_1 + y_1) \wedge \cdots \wedge (x_n + y_n) \bigr] \\
&= \det(X + Y).
\end{aligned} \tag{3.11}$$

Hence, we have proven the following theorem.

Theorem 3.2. Let $X$, $Y$ be arbitrary $n \times n$ matrices. Then

$$\det(X + Y) = \{\tau(X), \tau(Y)\}, \tag{3.12}$$

where the map $\tau$ is defined by (3.4) and the inner product $\{\cdot\,,\cdot\}$ is defined by (3.10).

4. Main theorem

We are now in the position to relate the determinant of the sum of two matrices to their compound matrices. Using (3.5) in (3.11) and (2.9) we find

$$\begin{aligned}
\det(X + Y) &= \det(Y) + \det(X) + \sum_{p=1}^{n-1} (-1)^{p(p+1)/2}\, *\bigl(\tau_p(X)^{\mathrm{T}}\, \Sigma_p E_p \wedge \tau_{n-p}(Y)\bigr) \\
&= \det(Y) + \det(X) + \sum_{p=1}^{n-1} (-1)^{p(p+1)/2}\, *\bigl[ \bigl(\zeta_{(p)}^{\mathrm{T}}\, \mathcal{C}_p(X)\bigr) \wedge \bigl(\Sigma_p E_p\, \mathcal{C}_{n-p}(Y)^{\mathrm{T}}\, \zeta_{(n-p)}\bigr) \bigr] \\
&= \det(Y) + \det(X) + \sum_{p=1}^{n-1} (-1)^{p(p+1)/2}\, *\operatorname{tr}\bigl( \mathcal{C}_p(X)^{\mathrm{T}}\, \zeta_{(p)} \wedge \zeta_{(n-p)}^{\mathrm{T}}\, \mathcal{C}_{n-p}(Y)\, E_p \Sigma_p \bigr) \\
&= \det(Y) + \det(X) + \sum_{p=1}^{n-1} \operatorname{tr}\bigl( \mathcal{C}_p(X)^{\mathrm{T}}\, \Sigma_p E_p\, \mathcal{C}_{n-p}(Y)\, E_p \Sigma_p \bigr) \\
&= \sum_{p=0}^{n} \operatorname{tr}\bigl( Y^{\operatorname{ad} p}\, \mathcal{C}_p(X) \bigr),
\end{aligned} \tag{4.1}$$

with the definition $\mathcal{C}_0(X) = 1$ for arbitrary $X \neq 0$. This may be summarized as the following theorem.

Theorem 4.1. Let $X$, $Y$ be arbitrary $n \times n$ matrices. The determinant of the sum of $X$ and $Y$ is given by

$$\det(X + Y) = \sum_{i=0}^{n} \operatorname{tr}\bigl( X^{\operatorname{ad} i}\, \mathcal{C}_i(Y) \bigr), \tag{4.2}$$

where $\mathcal{C}_i(Y)$ denotes the $i$th compound matrix of $Y$ and $X^{\operatorname{ad} i}$ denotes the $i$th adjugate compound matrix of $X$. We now demonstrate that this expression is indeed invariant with respect to the interchange of $X$ and $Y$, that is, $\det(X + Y) = \det(Y + X)$. Reversing the summation order by setting $p = n - i$ and recalling that $E_i = E_{n-i}$ and $\Sigma_{n-p} = (-1)^{n(n+1)/2} E_p \Sigma_p E_p$, we have

$$\begin{aligned}
\sum_{p=1}^{n-1} \operatorname{tr}\bigl( \mathcal{C}_p(X)^{\mathrm{T}}\, \Sigma_p E_p\, \mathcal{C}_{n-p}(Y)\, E_p \Sigma_p \bigr) &= \sum_{i=1}^{n-1} \operatorname{tr}\bigl( \mathcal{C}_{n-i}(X)^{\mathrm{T}}\, \Sigma_{n-i} E_i\, \mathcal{C}_i(Y)\, E_i \Sigma_{n-i} \bigr) \\
&= \sum_{i=1}^{n-1} \operatorname{tr}\bigl( \mathcal{C}_{n-i}(X)^{\mathrm{T}}\, E_i \Sigma_i\, \mathcal{C}_i(Y)\, \Sigma_i E_i \bigr) \\
&= \sum_{i=1}^{n-1} \operatorname{tr}\bigl( \mathcal{C}_i(Y)^{\mathrm{T}}\, \Sigma_i E_i\, \mathcal{C}_{n-i}(X)\, E_i \Sigma_i \bigr).
\end{aligned} \tag{4.3}$$

Equation (4.1) can be rearranged in several ways. With reference to the definition of the adjugate compound of a matrix we may write

$$\det(X + Y) = \det(X) + \det(Y) + \operatorname{tr}(X^{\operatorname{ad}} Y) + \operatorname{tr}(Y^{\operatorname{ad}} X) + \sum_{i=2}^{n-2} \operatorname{tr}\bigl( X^{\operatorname{ad} i}\, \mathcal{C}_i(Y) \bigr). \tag{4.4}$$

If we distinguish between even and odd dimensions $n$ by introducing the Gauss bracket $[r]$, which is the smallest integer greater than or equal to the non-negative real number $r$, then $n = 2[n/2] - n \bmod 2$ and we have

$$\begin{aligned}
\det(X + Y) = {} & \det(X) + \det(Y) \\
& + (1 - n \bmod 2)\, \operatorname{tr}\bigl( \mathcal{C}_{[n/2]}(Y)^{\mathrm{T}}\, \Sigma_{[n/2]} E_{[n/2]}\, \mathcal{C}_{[n/2]}(X)\, E_{[n/2]} \Sigma_{[n/2]} \bigr) \\
& + \sum_{i=1}^{[n/2]-1} \operatorname{tr}\bigl( X^{\operatorname{ad} i}\, \mathcal{C}_i(Y) + Y^{\operatorname{ad} i}\, \mathcal{C}_i(X) \bigr),
\end{aligned} \tag{4.5}$$

which obviously is invariant with respect to the interchange of $X$ and $Y$.

Remark 4.2. The above result is indeed equivalent to the formula given by Marcus (1975, pp. 145, 163). He used the notation $\det(X + Y) = \sum_{i=0}^{n} \operatorname{trace} R_i(X, Y)$, where $R_i(X, Y) := X^{\operatorname{ad} i}\, \mathcal{C}_i(Y)$ is called the $i$th compound Reiss matrix of $X$ and $Y$. This result is also consistent with the equivalent formulae for the determinant of the sum of several matrices presented by Amitsur (1980) and by Reutenauer & Schutzenberger (1987). These formulae consist of polynomial expressions in $\operatorname{tr} \mathcal{C}_i(W)$, where $W$ is a product of powers of all involved matrices. For the special case of a sum of two matrices these formulae consist of $2^n$ terms rather than the $n + 1$ terms of the formula above. In the final section we will study some examples.
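Theorem 4.1 lends itself to direct numerical verification. The following sketch is our own, not the authors': the hypothetical helpers implement (3.1) and the adjugate compound of property (xiv), with $\mathcal{C}_0 := 1$, so that the right-hand side of (4.2) can be compared with `numpy`'s determinant for random matrices.

```python
from itertools import combinations
from math import comb
import numpy as np

def compound(A, p):
    """p-th compound matrix: all p x p minors; C_0(A) := [[1]]."""
    if p == 0:
        return np.ones((1, 1))
    n, m = A.shape
    rows = list(combinations(range(n), p))
    cols = list(combinations(range(m), p))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

def adj_compound(X, k):
    """k-th adjugate compound: Sig_k E_k C_{n-k}(X)^T E_k Sig_k."""
    n = X.shape[0]
    Sig = np.diag([(-1.0) ** sum(s)
                   for s in combinations(range(1, n + 1), k)])
    E = np.fliplr(np.eye(comb(n, k)))
    return Sig @ E @ compound(X, n - k).T @ E @ Sig

rng = np.random.default_rng(2)
n = 4
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# (4.2): det(X + Y) as n + 1 traces; the i = 0 term is det(X) and
# the i = n term is det(Y)
rhs = sum(np.trace(adj_compound(X, i) @ compound(Y, i))
          for i in range(n + 1))
assert abs(rhs - np.linalg.det(X + Y)) < 1e-8
```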

5. Examples

(a) Determinants of rank-deficient matrices

Example 5.1. Let $y, z \in \mathbb{C}^n$ and $X \in \mathbb{C}^{n \times n}$. Since $\mathcal{C}_i(yz^{\mathrm{T}}) = 0$ for $i > 1$ we find

$$\det(X + yz^{\mathrm{T}}) = \det(X) + \operatorname{tr}(X^{\operatorname{ad}}\, yz^{\mathrm{T}}) = \det(X) + z^{\mathrm{T}} X^{\operatorname{ad}}\, y, \tag{5.1}$$

which is a well-known formula (see, for example, Lancaster & Tismenetsky 1985, p. 65).
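Formula (5.1) is the familiar rank-one update of a determinant. A quick numerical check (our sketch; for an invertible $X$ the adjugate is computed as $\det(X)\,X^{-1}$, which is not how one would compute it in general):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
X = rng.standard_normal((n, n))
y, z = rng.standard_normal(n), rng.standard_normal(n)

adjX = np.linalg.det(X) * np.linalg.inv(X)   # adjugate of invertible X
lhs = np.linalg.det(X + np.outer(y, z))
assert np.allclose(lhs, np.linalg.det(X) + z @ adjX @ y)   # (5.1)
```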

Example 5.2. Now suppose that $Y = [y_1, y_2]$, $Z = [z_1, z_2]$ and both have rank 2. Then

$$\det(X + YZ^{\mathrm{T}}) = \det(X) + \operatorname{tr}(Z^{\mathrm{T}} X^{\operatorname{ad}}\, Y) + \mathcal{C}_2(Z)^{\mathrm{T}}\, X^{\operatorname{ad} 2}\, \mathcal{C}_2(Y). \tag{5.2}$$

Note that the last term is a bilinear form because $\mathcal{C}_2(Y)$ and $\mathcal{C}_2(Z)$ are vectors of dimension $n_2 = n(n-1)/2$.

Example 5.3. A necessary condition for the sum of two singular $n \times n$ matrices $X$ and $Y$ to be non-singular is that the sum of their ranks is greater than or equal to $n$. For the special case $\operatorname{rank}(X) = m$, $\operatorname{rank}(Y) = n - m$, we want to explore the sufficient condition $\det(X + Y) \neq 0$. Let $X = AB^{\mathrm{T}}$, $Y = CD^{\mathrm{T}}$, where $A, B \in \mathbb{C}^{n \times m}$, $C, D \in \mathbb{C}^{n \times (n-m)}$ with $\operatorname{rank}(A) = \operatorname{rank}(B) = m$ and $\operatorname{rank}(C) = \operatorname{rank}(D) = n - m$. We then know that $\mathcal{C}_i(X) = 0$ for all $i > m$ and $\mathcal{C}_i(Y) = 0$ for all $i > n - m$. Hence we have

$$\begin{aligned}
\det(X + Y) &= \det(AB^{\mathrm{T}} + CD^{\mathrm{T}}) \\
&= \operatorname{tr}\bigl( \mathcal{C}_m(AB^{\mathrm{T}})^{\mathrm{T}}\, \Sigma_m E_m\, \mathcal{C}_{n-m}(CD^{\mathrm{T}})\, E_m \Sigma_m \bigr) \\
&= \operatorname{tr}\bigl( \mathcal{C}_m(B)\, \mathcal{C}_m(A)^{\mathrm{T}}\, \Sigma_m E_m\, \mathcal{C}_{n-m}(C)\, \mathcal{C}_{n-m}(D)^{\mathrm{T}}\, E_m \Sigma_m \bigr) \\
&= \mathcal{C}_m(A)^{\mathrm{T}}\, \Sigma_m E_m\, \mathcal{C}_{n-m}(C)\;\, \mathcal{C}_{n-m}(D)^{\mathrm{T}}\, E_m \Sigma_m\, \mathcal{C}_m(B),
\end{aligned} \tag{5.3}$$

which is the product of two bilinear forms.


Example 5.4. As a special case of the preceding example we want to calculate the determinant of the partitioned matrix

$$P := \begin{bmatrix} A & C \\ B & D \end{bmatrix} = \underbrace{\begin{bmatrix} A \\ B \end{bmatrix} [I_m,\; 0]}_{=:X} + \underbrace{\begin{bmatrix} C \\ D \end{bmatrix} [0,\; I_{n-m}]}_{=:Y}, \tag{5.4}$$

where $A \in \mathbb{C}^{m \times m}$, $B \in \mathbb{C}^{(n-m) \times m}$, $C \in \mathbb{C}^{m \times (n-m)}$ and $D \in \mathbb{C}^{(n-m) \times (n-m)}$. We then have

$$\begin{aligned}
\det(P) &= \det(X + Y) \\
&= \mathcal{C}_m\!\left( \begin{bmatrix} A \\ B \end{bmatrix} \right)^{\mathrm{T}} \Sigma_m E_m\, \mathcal{C}_{n-m}\!\left( \begin{bmatrix} C \\ D \end{bmatrix} \right) \mathcal{C}_{n-m}\!\left( \begin{bmatrix} 0 \\ I_{n-m} \end{bmatrix} \right)^{\mathrm{T}} E_m \Sigma_m\, \mathcal{C}_m\!\left( \begin{bmatrix} I_m \\ 0 \end{bmatrix} \right) \\
&= \mathcal{C}_m\!\left( \begin{bmatrix} A \\ B \end{bmatrix} \right)^{\mathrm{T}} \Sigma_m E_m\, \mathcal{C}_{n-m}\!\left( \begin{bmatrix} C \\ D \end{bmatrix} \right) e_{n_m}^{\mathrm{T}} E_m \Sigma_m\, e_1 \\
&= \mathcal{C}_m\!\left( \begin{bmatrix} A \\ B \end{bmatrix} \right)^{\mathrm{T}} \Sigma_m E_m\, \mathcal{C}_{n-m}\!\left( \begin{bmatrix} C \\ D \end{bmatrix} \right) e_1^{\mathrm{T}} \Sigma_m\, e_1 \\
&= (-1)^{m(m+1)/2}\, \mathcal{C}_m\!\left( \begin{bmatrix} A \\ B \end{bmatrix} \right)^{\mathrm{T}} \Sigma_m E_m\, \mathcal{C}_{n-m}\!\left( \begin{bmatrix} C \\ D \end{bmatrix} \right).
\end{aligned} \tag{5.5}$$

(b) Determinants involving one diagonal matrix

Example 5.5. We want to find an expression for $\det(X + \Lambda)$, where $\Lambda = \operatorname{diag}_{i=1,\ldots,n}(\lambda_i)$. For the sake of clarity we define the $n_i$-dimensional vectors

$$\ell_i := \begin{pmatrix} e_1^{\mathrm{T}} E_i\, \mathcal{C}_i(\Lambda)\, E_i\, e_1 \\ \vdots \\ e_{n_i}^{\mathrm{T}} E_i\, \mathcal{C}_i(\Lambda)\, E_i\, e_{n_i} \end{pmatrix} = \begin{pmatrix} \lambda_{n-i+1} \cdots \lambda_n \\ \vdots \\ \lambda_1 \cdots \lambda_i \end{pmatrix}, \tag{5.6}$$

$$\mu_i := \begin{pmatrix} e_1^{\mathrm{T}}\, \mathcal{C}_i(X)\, e_1 \\ \vdots \\ e_{n_i}^{\mathrm{T}}\, \mathcal{C}_i(X)\, e_{n_i} \end{pmatrix}. \tag{5.7}$$

We then find

$$\begin{aligned}
\det(X + \Lambda) &= \det(X) + \prod_{i=1}^{n} \lambda_i + \sum_{i=1}^{n-1} \operatorname{tr}\bigl( \mathcal{C}_i(\Lambda)^{\mathrm{T}}\, \Sigma_i E_i\, \mathcal{C}_{n-i}(X)\, E_i \Sigma_i \bigr) \\
&= \det(X) + \prod_{i=1}^{n} \lambda_i + \sum_{i=1}^{n-1} \operatorname{tr}\bigl( E_i\, \mathcal{C}_i(\Lambda)\, E_i\, \mathcal{C}_{n-i}(X) \bigr) \\
&= \det(X) + \prod_{i=1}^{n} \lambda_i + \sum_{i=1}^{n-1} \ell_i^{\mathrm{T}} \mu_{n-i}.
\end{aligned} \tag{5.8}$$

The following two examples are special cases of this expression.
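The middle line of (5.8) needs only the compound matrices and the rotated identity $E_i$, which makes it easy to check numerically. The sketch below is our own illustration, not the authors' code:

```python
from itertools import combinations
from math import comb
import numpy as np

def compound(A, p):
    """p-th compound matrix: all p x p minors of A."""
    n, m = A.shape
    rows = list(combinations(range(n), p))
    cols = list(combinations(range(m), p))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

rng = np.random.default_rng(4)
n = 4
X = rng.standard_normal((n, n))
lam = rng.standard_normal(n)
Lam = np.diag(lam)

# middle line of (5.8): det(X) + prod(lam) + sum_i tr(E_i C_i(Lam) E_i C_{n-i}(X))
rhs = np.linalg.det(X) + np.prod(lam)
for i in range(1, n):
    E = np.fliplr(np.eye(comb(n, i)))        # rotated identity E_i
    rhs += np.trace(E @ compound(Lam, i) @ E @ compound(X, n - i))
assert abs(rhs - np.linalg.det(X + Lam)) < 1e-8
```

The signature matrices $\Sigma_i$ cancel here because $\mathcal{C}_i(\Lambda)$ is diagonal, which is exactly the simplification made in the second line of (5.8).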


Example 5.6. Let $X \in \mathbb{C}^{n \times n}$. The characteristic polynomial of $X$ is then given by

$$\det(X - \lambda I_n) = \det(X) + (-1)^n \lambda^n + \sum_{i=1}^{n-1} (-1)^i \lambda^i \operatorname{tr} \mathcal{C}_{n-i}(X). \tag{5.9}$$

Example 5.7. As a special case of the above expression we have

$$\det(X + I_n) = 1 + \det(X) + \sum_{i=1}^{n-1} \operatorname{tr} \mathcal{C}_i(X). \tag{5.10}$$
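Both special cases reduce to traces of compound matrices and can be checked at once; the sketch below is ours, evaluating (5.9) at an arbitrary point and then (5.10):

```python
from itertools import combinations
import numpy as np

def compound(A, p):
    """p-th compound matrix: all p x p minors of A."""
    n, m = A.shape
    rows = list(combinations(range(n), p))
    cols = list(combinations(range(m), p))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in cols]
                     for r in rows])

rng = np.random.default_rng(5)
n = 4
X = rng.standard_normal((n, n))

# (5.9): characteristic polynomial evaluated at lam
lam = 0.7
rhs = np.linalg.det(X) + (-1.0) ** n * lam ** n \
    + sum((-1.0) ** i * lam ** i * np.trace(compound(X, n - i))
          for i in range(1, n))
assert abs(rhs - np.linalg.det(X - lam * np.eye(n))) < 1e-8

# (5.10): det(X + I) = 1 + det(X) + sum_i tr C_i(X)
rhs1 = 1 + np.linalg.det(X) + sum(np.trace(compound(X, i))
                                  for i in range(1, n))
assert abs(rhs1 - np.linalg.det(X + np.eye(n))) < 1e-8
```

The traces $\operatorname{tr}\mathcal{C}_i(X)$ are the elementary symmetric functions of the eigenvalues of $X$, which is why (5.9) reproduces the characteristic polynomial coefficient by coefficient.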

6. Conclusions

We have demonstrated the capabilities of geometric algebra by presenting what is, to the best of our knowledge, a new derivation of a formula for the determinant of the sum of two matrices. This derivation includes an introduction to the relationship between Grassmann products and compound matrices and motivates further study. Several examples of applications of the derived formula have been presented.

The authors acknowledge funding by the Engineering and Physical Sciences Research Council through the two linked grants, nos GR/M93062 and GR/M93079, entitled 'The application of geometric algebra to second order dynamic systems in engineering'.

Nomenclature

$\mathbb{N}$  set of the natural numbers
$\mathbb{C}$  field of complex numbers
$\mathbb{C}^n$  $n$-dimensional vector space over $\mathbb{C}$
$\mathbb{C}^{n \times m}$  set of all complex $n \times m$ matrices
$e_i$  vector of zeros except a 1 in position $i$
$n_i := \binom{n}{i} := n! / [i!\,(n-i)!]$
$I_n$  identity matrix of order $n$
$\underline{n} := \{1, \ldots, n\}$
$\{S_{pi}\}_{i=1,\ldots,n_p}$  sequence of lexicographically ordered sets of combinations without repetition of $p$ indices out of $\underline{n}$
$\pi_{pi} := \sum_{j \in S_{pi}} j$
$\wedge$  wedge, exterior or Grassmann product
$(\ldots)^{\mathrm{T}}$  transposition
$x_{i_1 \cdots i_p} := x_{i_1} \wedge \cdots \wedge x_{i_p}$, $i_\ell \in \underline{n}$, $x_j \in \mathbb{C}^n$ for all $j \in \underline{n}$

References

Aitken, A. C. 1954 Determinants and matrices, 8th edn. Edinburgh: Oliver & Boyd.
Amitsur, S. A. 1980 On the characteristic polynomial of a sum of matrices. Linear Multilinear Algebra 8, 177–182.


Barret, W. W. & Jarvis, T. J. 1992 Spectral properties of a matrix of Redheffer. Linear Algebra Applic. 162–164, 673–683.
Bebiano, N. 1994 New developments on the Marcus–Oliveira conjecture. Linear Algebra Applic. 197–198, 793–803.
Bebiano, N., Li, C.-K. & Da Providencia, J. 1994 Determinant of the sum of a symmetric and a skew-symmetric matrix. SIAM J. Matrix Analysis Appl. 18, 74–82.
Cullis, C. E. 1913 Matrices and determinoids, vol. 1. Cambridge University Press.
Cullis, C. E. 1918 Matrices and determinoids, vol. 2. Cambridge University Press.
Cullis, C. E. 1925 Matrices and determinoids, vol. 3, part I. Cambridge University Press.
Fuchs, M. B. 1992 The explicit inverse of the stiffness matrix. Int. J. Solids Struct. 29, 2101–2113.
Gantmacher, F. R. 1959 The theory of matrices, vols 1 and 2. New York: Chelsea.
Horn, R. A. & Johnson, Ch. R. 1985 Matrix analysis. Cambridge University Press.
Krattenthaler, C. & Zeilberger, D. 1970 Proof of a determinant evaluation conjectured by Bombieri, Hunt and van der Poorten. New York J. Math. 3, 54–102.
Lancaster, P. & Tismenetsky, M. 1985 Theory of matrices, 2nd edn. Academic.
Li, C.-K. & Mathias, R. 1995 The determinant of the sum of two matrices. Bull. Austral. Math. Soc. 52, 425–429.
Linsay, K. A. & Rooney, C. E. 1992 A note on compound matrices. J. Computat. Phys. 103, 472–477.
Malik, R. N. 1970 Compound matrices to the tree-generating problem. IEEE Trans. Circuit Theory 17, 149–151.
Marcus, M. 1973 Finite dimensional multilinear algebra, part I. New York: Marcel Dekker.
Marcus, M. 1975 Finite dimensional multilinear algebra, part II. New York: Marcel Dekker.
Mitrouli, M. & Koukouvinos, C. 1997 On the computation of the Smith normal form of compound matrices. Numer. Algorithms 16, 95–105.
Nambiar, K. K. 1997 Hall's theorem and compound matrices. Math. Comput. Modelling 25, 23–24.
Nambiar, K. K. & Keating, J. D. 1970 Application of compound matrices to linear systems. IEEE Trans. Circuit Theory 17, 626–628.
Porteous, I. R. 1995 Clifford algebras and the classical groups. Cambridge Studies in Advanced Mathematics. Cambridge University Press.
Reutenauer, Ch. & Schutzenberger, M.-P. 1987 A formula for the determinant of a sum of matrices. Lett. Math. Phys. 13, 299–302.
