10.1098/rspa.2002.1040

Use of geometric algebra: compound matrices and the determinant of the sum of two matrices

By Uwe Prells¹, Michael I. Friswell¹ and Seamus D. Garvey²

¹Dynamics Research Group, School of Engineering, University of Wales Swansea, Swansea SA2 8PP, UK
²School of Mechanical, Materials, Manufacturing Engineering and Management, The University of Nottingham, University Park, Nottingham NG7 2RD, UK

Received 14 February 2002; revised 7 June 2002; accepted 24 June 2002; published online 14 November 2002

In this paper we demonstrate the capabilities of geometric algebra by the derivation of a formula for the determinant of the sum of two matrices in which both matrices are separated, in the sense that the resulting expression consists of a sum of traces of products of their compound matrices. For the derivation we introduce a vector of Grassmann elements associated with an arbitrary square matrix, we recall the concept of compound matrices and we summarize some of their properties. This paper introduces a new derivation and interpretation of the relationship between p-forms and the pth compound matrix, and it demonstrates the use of geometric algebra, which has the potential to be applied to a wide range of problems.

Keywords: geometric algebra; determinants; compound matrices

1. Introduction

Many interesting questions are related to the determinant of matrices (Krattenthaler & Zeilberger 1970), and in particular to the determinant of the sum of two matrices (Li & Mathias 1995; Bebiano et al. 1994). Indeed, some of these still remain unanswered (Barret & Jarvis 1992; Bebiano 1994). A natural generalization of determinants are compound matrices, which are closely related to the presentation of geometric algebra.
As with geometric algebra, the concept of compound matrices has been neglected for many years, and its potential in theoretical analysis and practical application has been widely underestimated (see, for example, Nambiar 1997; Nambiar & Keating 1970; Malik 1970; Linsay & Rooney 1992; Fuchs 1992; Mitrouli & Koukouvinos 1997). For example, Nambiar & Keating (1970) conclude their paper by saying

    while a few network theorists certainly seem to be aware of the potentialities here, largely compound matrices seem to be a neglected subject. Even an outstanding book like Gantmacher [(1959)] makes only a casual mention of it.

Proc. R. Soc. Lond. A (2003) 459, 273–285 © 2002 The Royal Society

Since the early books of Cullis (1913, 1918, 1925) and the often-quoted book of Aitken (1954), no recent book has reflected the modern application capabilities of the concept of compound matrices in depth. We hope that this paper encourages mathematicians, physicists and engineers to revive the concept of compound matrices within the framework of geometric algebra and to use it practically. However, the main scope of this paper is to present a derivation of a formula for the determinant of the sum of two matrices. This formula is equivalent to applying the Laplace expansion theorem to the determinant of the sum of two matrices (Marcus 1975), although it is not a simple task to use this relation. The form of the expression presented in this paper appears to be more accessible than previous versions, and the derivation itself motivates further study of the relationship between geometric algebra and compound matrices.

Of course, since the determinant of an $n \times n$ matrix $A$ is identical to its $n$th compound matrix $\mathcal{C}_n(A)$, the Binet–Cauchy theorem provides one way of separation:

$$\det(A + B) = \det\!\left( [A \;\; I_n] \begin{bmatrix} I_n \\ B \end{bmatrix} \right) = \mathcal{C}_n([A \;\; I_n]) \, \mathcal{C}_n\!\left( \begin{bmatrix} I_n \\ B \end{bmatrix} \right). \tag{1.1}$$

Obviously, the right-hand side is a scalar product of two $\binom{2n}{n}$-dimensional vectors.
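The separation (1.1) is easy to check numerically. The sketch below builds the $p$th compound matrix from lexicographically ordered minors (the ordering convention used throughout this paper) and verifies that the scalar product of the two $\binom{2n}{n}$-dimensional compound vectors reproduces $\det(A+B)$ for random $3 \times 3$ matrices; the helper name `compound` and the test data are ours, not from the paper.

```python
import numpy as np
from itertools import combinations

def compound(M, p):
    """p-th compound matrix of M: all p x p minors det(M[rows, cols]),
    with row and column subsets enumerated in lexicographic order."""
    m, n = M.shape
    row_sets = list(combinations(range(m), p))
    col_sets = list(combinations(range(n), p))
    return np.array([[np.linalg.det(M[np.ix_(r, c)]) for c in col_sets]
                     for r in row_sets])

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# C_n([A  I_n]) is a 1 x C(2n, n) row vector,
# C_n([I_n; B]) is a C(2n, n) x 1 column vector.
left = compound(np.hstack([A, np.eye(n)]), n)
right = compound(np.vstack([np.eye(n), B]), n)

# Binet-Cauchy: their scalar product equals det(A + B), as in (1.1).
assert np.allclose(left @ right, np.linalg.det(A + B))
```

The shapes make the point of (1.1) concrete: for $n = 3$ both compound vectors have $\binom{6}{3} = 20$ entries.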
But is it possible to express this scalar product in terms of the compound matrices of $A$ and $B$? The answer is affirmative. To derive a corresponding formula we need some definitions and notation. In the next section the basics of Grassmann algebra are recalled, and in §3 the relation to compound matrices is highlighted. The main result is given in §4. Section 5 contains some examples.

2. Grassmann algebra

We define the Grassmann algebra $\mathcal{G}_n(\mathbb{C}) := \bigoplus_{p=0}^{n} V_p(\mathbb{C})$, briefly written as $\mathcal{G}$, as the direct sum of the $n_p := \binom{n}{p}$-dimensional vector spaces $V_p$ over the field $\mathbb{C} \equiv V_0$. The basis of $\mathcal{G}$ is $2^n$-dimensional and is spanned by the basis elements

$$\{\zeta_1, \zeta_2, \ldots, \zeta_{n+1}, \zeta_{n+2}, \ldots, \zeta_{2^n}\} := \{1, e_1, \ldots, e_n, e_1 \wedge e_2, \ldots, e_1 \wedge \cdots \wedge e_n\}, \tag{2.1}$$

where $e_i$ is the $i$th column of the $n$-dimensional identity matrix $I_n$ and the Grassmann product (exterior or outer product) is defined by

$$e_k \wedge e_i + e_i \wedge e_k = 0, \tag{2.2}$$

which is a special case of the general geometric or Clifford product

$$e_k \circ_B e_i + e_i \circ_B e_k = B(e_i, e_k), \tag{2.3}$$

where $B(e_i, e_k)$ is an arbitrary bilinear form (Porteous 1995). The above definition of the Grassmann product implies $e_i \wedge e_i = 0$. The basis element

$$e_{i_1 \cdots i_p} := e_{i_1} \wedge \cdots \wedge e_{i_p}, \qquad 1 \leqslant i_1 < \cdots < i_p \leqslant n, \tag{2.4}$$

is called the basic $p$-form, basic $p$-vector or $p$-blade. The subsequence of all $p$-forms in (2.1) is ordered lexicographically, which means that if we realize $i_1 \cdots i_p$ as the digits of a number with a sufficiently high basis, then the order is increasing from $1 \cdots p$ to $(n-p+1) \cdots n$. Because the basis is ordered, (2.1) can be written uniquely as a vector

$$\zeta := \begin{bmatrix} \zeta_{(0)} \\ \zeta_{(1)} \\ \vdots \\ \zeta_{(n)} \end{bmatrix}, \tag{2.5}$$

where for $p \in \{0, \ldots, n\}$ the $n_p$-dimensional vector $\zeta_{(p)}$ contains all basis elements of $V_p$ and is associated with $p$-forms by

$$\zeta_{(p)} := \begin{bmatrix} \zeta_{(p)_1} \\ \vdots \\ \zeta_{(p)_{n_p}} \end{bmatrix} := \begin{bmatrix} e_{1 \cdots p} \\ \vdots \\ e_{i_1 \cdots i_p} \\ \vdots \\ e_{(n-p+1) \cdots n} \end{bmatrix} \in \mathcal{G}^{n_p}. \tag{2.6}$$
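The dimension counts and the lexicographic ordering behind (2.1)–(2.6) can be made concrete by encoding each basic $p$-form $e_{i_1 \cdots i_p}$ as its ordered index tuple. The sketch below (the function name `zeta` is ours, chosen to mirror $\zeta_{(p)}$) checks that $\dim V_p = n_p = \binom{n}{p}$, that the full basis has $2^n$ elements, and that realizing $i_1 \cdots i_p$ as digits does give an increasing sequence.

```python
from itertools import combinations
from math import comb

n = 4

def zeta(p, n):
    """zeta_(p): the basic p-forms e_{i1...ip}, 1 <= i1 < ... < ip <= n,
    as 1-based index tuples in lexicographic order, as in (2.6)."""
    return list(combinations(range(1, n + 1), p))

# The full basis (2.1): concatenation of zeta_(0), ..., zeta_(n).
basis = [blade for p in range(n + 1) for blade in zeta(p, n)]

assert all(len(zeta(p, n)) == comb(n, p) for p in range(n + 1))  # dim V_p = n_p
assert len(basis) == 2 ** n                                      # dim G = 2^n

# Lexicographic ordering: read i1...ip as digits of a number; the
# resulting strings are increasing from 1...p to (n-p+1)...n.
for p in range(2, n + 1):
    digits = [''.join(map(str, b)) for b in zeta(p, n)]
    assert digits == sorted(digits)
```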
In particular we have $\zeta_{(0)} \equiv \zeta_1 = 1$ and $\zeta_{(n)} \equiv \zeta_{2^n} = e_{1 \cdots n}$. The 0-forms are scalars, the 1-forms are vectors, the 2-forms are bi-vectors, and so on. A typical element $g \in \mathcal{G}$ is called a Grassmann element or Grassmann number and is a linear combination of scalars $g_i \in \mathbb{C}$ with all basis elements, which can alternatively be written as

$$g = \sum_{i=1}^{2^n} g_i \zeta_i = g^{\mathrm{T}} \zeta = \sum_{p=0}^{n} g_{(p)}^{\mathrm{T}} \zeta_{(p)}, \tag{2.7}$$

where $g_i$ is the $i$th element of the vector $g \in \mathbb{C}^{2^n}$ and where the vectors $g_{(p)} \in \mathbb{C}^{n_p}$ are associated with the $p$-form part of $g$, that is,

$$[g]_p := g_{(p)}^{\mathrm{T}} \zeta_{(p)} \in V_p. \tag{2.8}$$

Since $n_p = n_{n-p}$, the vectors $\zeta_{(p)}$ and $\zeta_{(n-p)}$ have the same dimension. We extend the definition of the exterior product to the product $\zeta_{(p)} \wedge \zeta_{(n-p)}^{\mathrm{T}}$ of two vectors of Grassmann numbers, analogous to the dyadic product of two vectors, as the componentwise exterior product, that is,

$$\zeta_{(p)} \wedge \zeta_{(n-p)}^{\mathrm{T}} = \begin{bmatrix} e_{1 \cdots p} \\ \vdots \\ e_{i_1 \cdots i_p} \\ \vdots \\ e_{(n-p+1) \cdots n} \end{bmatrix} \wedge \bigl( e_{1 \cdots (n-p)}, \ldots, e_{i_{p+1} \cdots i_n}, \ldots, e_{(p+1) \cdots n} \bigr)$$

$$:= \begin{bmatrix} e_{1 \cdots p} \wedge e_{1 \cdots (n-p)} & \cdots & e_{1 \cdots p} \wedge e_{i_{p+1} \cdots i_n} & \cdots & e_{1 \cdots p} \wedge e_{(p+1) \cdots n} \\ \vdots & & \vdots & & \vdots \\ e_{i_1 \cdots i_p} \wedge e_{1 \cdots (n-p)} & \cdots & e_{i_1 \cdots i_p} \wedge e_{i_{p+1} \cdots i_n} & \cdots & e_{i_1 \cdots i_p} \wedge e_{(p+1) \cdots n} \\ \vdots & & \vdots & & \vdots \\ e_{(n-p+1) \cdots n} \wedge e_{1 \cdots (n-p)} & \cdots & e_{(n-p+1) \cdots n} \wedge e_{i_{p+1} \cdots i_n} & \cdots & e_{(n-p+1) \cdots n} \wedge e_{(p+1) \cdots n} \end{bmatrix}$$
$$= (-1)^{p(p+1)/2} \begin{bmatrix} & & (-1)^{1 + \cdots + p} \\ & \iddots & \\ (-1)^{(n-p+1) + \cdots + n} & & \end{bmatrix} e_{1 \cdots n} = (-1)^{p(p+1)/2} \Sigma_p E_p \zeta_{(n)}, \tag{2.9}$$

where the $n_p \times n_p$ matrix $E_p$ is the rotated identity matrix, that is,

$$E_p := \begin{bmatrix} & & 1 \\ & \iddots & \\ 1 & & \end{bmatrix}, \tag{2.10}$$

and $\Sigma_p := \operatorname{diag}[(-1)^{\pi_{pk}}]_{k=1,\ldots,n_p}$ with $\pi_{pk} := i_1 + \cdots + i_p$. Without proof we note that

$$\Sigma_{n-p} = (-1)^{n(n+1)/2} E_p \Sigma_p E_p. \tag{2.11}$$

For each set $S_{pk} = \{i_1, \ldots, i_p\}$ of indices $1 \leqslant i_1 < \cdots < i_p \leqslant n$ there exists one unique set $S_{n-p,j} = \{i_{p+1}, \ldots, i_n\}$ of indices $1 \leqslant i_{p+1} < \cdots < i_n \leqslant n$ such that $S_{pk} \cup S_{n-p,j} = \{1, \ldots, n\}$ and $S_{pk} \cap S_{n-p,j} = \emptyset$. Although each set is ordered, the concatenated set $\{i_1, \ldots, i_p, i_{p+1}, \ldots, i_n\}$ is not ordered. Now consider the $n$-form $e_{i_1 \cdots i_p} \wedge e_{i_{p+1} \cdots i_n}$. Since $i_1, \ldots, i_n$ is a permutation of $1, \ldots, n$, this $n$-form differs only by a sign from $\zeta_{(n)} = e_{1 \cdots n}$. The sign is negative if the permutation has an odd number of transpositions, and it is positive if the permutation has an even number of transpositions. The result

$$e_{i_1 \cdots i_p} \wedge e_{i_{p+1} \cdots i_n} = (-1)^{\pi_{pk} - p(p+1)/2} e_{1 \cdots n}, \tag{2.12}$$

which motivates the definition of the star operator (Hodge duality)

$$\ast e_{i_1 \cdots i_p} := (-1)^{\pi_{pk} - p(p+1)/2} e_{i_{p+1} \cdots i_n}, \tag{2.13}$$

is more convenient. Since $\pi_{pk} + \pi_{n-p,j} = n(n+1)/2$, a short calculation reveals

$$\ast \ast e_{i_1 \cdots i_p} = (-1)^{p(n-p)} e_{i_1 \cdots i_p}. \tag{2.14}$$

Declaring the action of the star operator on $\zeta_{(p)}$ componentwise,

$$\ast \zeta_{(p)} := \begin{bmatrix} \ast \zeta_{(p)_1} \\ \vdots \\ \ast \zeta_{(p)_{n_p}} \end{bmatrix} = (-1)^{p(p+1)/2} \Sigma_p E_p \zeta_{(n-p)}, \tag{2.15}$$

we find

$$\zeta_{(p)} \wedge \ast \zeta_{(p)}^{\mathrm{T}} = (-1)^{p(p+1)/2} \zeta_{(p)} \wedge \zeta_{(n-p)}^{\mathrm{T}} E_p \Sigma_p = I_{n_p} \zeta_{(n)}. \tag{2.16}$$

As an analogy to the scalar product of two vectors we define the product

$$\zeta_{(p)}^{\mathrm{T}} \wedge \ast \zeta_{(p)} := (-1)^{p(p+1)/2} \zeta_{(p)}^{\mathrm{T}} \wedge \Sigma_p E_p \zeta_{(n-p)} = (-1)^{p(p+1)/2} \operatorname{tr}(E_p \Sigma_p \zeta_{(p)} \wedge \zeta_{(n-p)}^{\mathrm{T}}) = n_p \zeta_{(n)}. \tag{2.17}$$
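The sign bookkeeping behind (2.12)–(2.14) reduces to counting inversions: sorting the concatenation $i_1, \ldots, i_p, i_{p+1}, \ldots, i_n$ into $1, \ldots, n$ takes exactly $\pi_{pk} - p(p+1)/2$ transpositions. The following sketch encodes blades as ordered index tuples and verifies (2.12), the complement identity $\pi_{pk} + \pi_{n-p,j} = n(n+1)/2$, and the double-dual rule (2.14) for every blade with $n = 5$; the helper names `wedge_sign` and `star` are ours.

```python
from itertools import combinations

n = 5

def wedge_sign(I, J):
    """Sign s with e_I ^ e_J = s * e_{sorted(I + J)}, or 0 if an index
    repeats (e_i ^ e_i = 0). By (2.2), s is the parity of the
    permutation that sorts the concatenated index sequence."""
    seq = I + J
    if len(set(seq)) < len(seq):
        return 0
    inversions = sum(1 for a in range(len(seq)) for b in range(a + 1, len(seq))
                     if seq[a] > seq[b])
    return (-1) ** inversions

def star(I, n):
    """Hodge dual (2.13): *e_I = (-1)^(pi_pk - p(p+1)/2) e_J,
    with J the ordered complement of I in {1, ..., n}."""
    p, pi = len(I), sum(I)
    J = tuple(i for i in range(1, n + 1) if i not in I)
    return (-1) ** (pi - p * (p + 1) // 2), J

for p in range(n + 1):
    for I in combinations(range(1, n + 1), p):
        s, J = star(I, n)
        assert wedge_sign(I, J) == s                    # (2.12)
        assert sum(I) + sum(J) == n * (n + 1) // 2      # pi_pk + pi_{n-p,j}
        t, K = star(J, n)
        assert K == I and s * t == (-1) ** (p * (n - p))  # (2.14)
```

In particular, $e_I \wedge \ast e_I = e_{1 \cdots n}$ for every blade, which is the componentwise statement summed up in (2.17).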