Hadamard, Khatri-Rao, Kronecker and Other Matrix Products


INTERNATIONAL JOURNAL OF INFORMATION AND SYSTEMS SCIENCES
Volume 4, Number 1, Pages 160–177
© 2008 Institute for Scientific Computing and Information

HADAMARD, KHATRI-RAO, KRONECKER AND OTHER MATRIX PRODUCTS

SHUANGZHE LIU AND GÖTZ TRENKLER

This paper is dedicated to Heinz Neudecker, with affection and appreciation

Abstract. In this paper we present a brief overview of the Hadamard, Khatri-Rao, Kronecker and several related non-simple matrix products and their properties. We include practical applications, in different areas, of some of the algebraic results.

Key Words. Hadamard (or Schur) product, Khatri-Rao product, Tracy-Singh product, Kronecker product, vector cross product, positive definite variance matrix, multi-way model, linear matrix equation, signal processing.

1. Introduction

Matrices and matrix operations are studied and applied in many areas such as engineering and the natural and social sciences. Books written on these topics include Berman et al. (1989), Barnett (1990), Lütkepohl (1996), Schott (1997), Magnus and Neudecker (1999), Hogben (2006), Schmidt and Trenkler (2006), and especially Bernstein (2005), whose preface classifies a number of related books by area. In this paper we briefly survey results (with practical applications) involving several non-simple matrix products, as these products play an important role in current research and deserve such attention.

The Hadamard (or Schur) and Kronecker products are studied and applied widely in matrix theory, statistics, system theory and other areas; see, e.g., Styan (1973), Neudecker and Liu (1995), Neudecker and Satorra (1995), Trenkler (1995), Rao and Rao (1998), Zhang (1999), Liu (2000a, 2000b) and Van Trees (2002). Horn (1990) presents an excellent survey focusing on the Hadamard product.
Magnus and Neudecker (1999) include basic results and statistical applications involving the Hadamard or Kronecker products. An equality connecting the Hadamard and Kronecker products appears to have been first used by, e.g., Browne (1974), Pukelsheim (1977) and Faliva (1983). Trenkler (2001, 2002) and Neudecker and Trenkler (2005, 2006a) study the Kronecker and vector cross products.

For partitioned matrices, the Khatri-Rao product, viewed as a generalized Hadamard product, is discussed and used by e.g. Khatri and Rao (1968), Rao (1970), Rao and Kleffe (1988), Horn and Mathias (1992), Liu (1995, 1999, 2002a), Rao and Rao (1998), and Yang (2002a, 2002b, 2003, 2005) and his co-authors. The Tracy-Singh product, as a generalized Kronecker product, is studied by e.g. Singh (1972), Tracy and Singh (1972), Hyland and Collins (1989), Tracy and Jinadasa (1989) and Koning et al. (1991). A connection between the Khatri-Rao and Tracy-Singh products, and several results involving these two products of positive definite matrices with statistical applications, are given by Liu (1999).

In the present paper, we attempt to provide a brief overview of some selected results recently obtained on the Hadamard, Khatri-Rao, Kronecker and other products; we do not intend to present a complete list of all existing results. We stress that the Khatri-Rao product has significant applications and potential in many areas, including matrix theory, statistics (including psychometrics and econometrics) and signal processing. Further attention to and study of the Khatri-Rao product will prove useful.

Received by the editors January 15, 2007 and, in revised form, April 30, 2007.
2000 Mathematics Subject Classification. 15A45, 15A69, 62F10.
This research was supported by the University of Canberra and by the University of Dortmund.
In Section 2, we introduce the definitions of the Hadamard, Kronecker, Khatri-Rao, Tracy-Singh and vector cross products, and the Khatri-Rao and Tracy-Singh sums, together with basic relations among the products. In Section 3, we present some equalities and inequalities involving positive definite matrices. We collect both well-known and existing but not widely-known inequalities involving the Hadamard product; these results, however, cannot be extended to cover the Khatri-Rao product. We also collect several known inequalities involving the Khatri-Rao product and results involving the Kronecker and vector cross products, including those to be used in Section 4. Finally, we compile some applications of the results involving the Khatri-Rao product, as examples to illustrate how the results can be used in various areas.

2. Basic results

We give the definitions of these matrix products and state established relations among them. We deal only with real matrices in this paper.

2.1. Definitions. Consider matrices A = (a_ij) and C = (c_ij) of order m × n and B = (b_kl) of order p × q. Let A = (A_ij) be partitioned with A_ij of order m_i × n_j as the (i, j)th block submatrix, and let B = (B_kl) be partitioned with B_kl of order p_k × q_l as the (k, l)th block submatrix (Σ m_i = m, Σ n_j = n, Σ p_k = p and Σ q_l = q). The definitions of the matrix products or sums of A and B are given as follows.

2.1.1. Hadamard product.

(1) A ⊙ C = (a_ij c_ij)_ij,

where a_ij c_ij is a scalar and A ⊙ C is of order m × n.

2.1.2. Kronecker product.

(2) A ⊗ B = (a_ij B)_ij,

where a_ij B is of order p × q and A ⊗ B is of order mp × nq.

2.1.3. Khatri-Rao product.

(3) A ∗ B = (A_ij ⊗ B_ij)_ij,

where A_ij ⊗ B_ij is of order m_i p_i × n_j q_j and A ∗ B is of order (Σ m_i p_i) × (Σ n_j q_j).

2.1.4. Tracy-Singh product.

(4) A ⋈ B = (A_ij ⋈ B)_ij = ((A_ij ⊗ B_kl)_kl)_ij,

where A_ij ⊗ B_kl is of order m_i p_k × n_j q_l, A_ij ⋈ B is of order m_i p × n_j q and A ⋈ B is of order mp × nq.

2.1.5. Khatri-Rao sum.
(5) A ⊛ B = A ∗ I_p + I_m ∗ B,

where A = (A_ij) and B = (B_kl) are square matrices of respective orders m × m and p × p with A_ij of order m_i × m_j and B_kl of order p_k × p_l, I_p and I_m are compatibly partitioned identity matrices, and A ⊛ B is of order (Σ m_i p_i) × (Σ m_i p_i).

2.1.6. Tracy-Singh sum.

(6) A ⊞ B = A ⋈ I_p + I_m ⋈ B,

where A = (A_ij) and B = (B_kl) are square matrices of respective orders m × m and p × p with A_ij of order m_i × m_j and B_kl of order p_k × p_l, I_p and I_m are compatibly partitioned identity matrices, and A ⊞ B is of order mp × mp.

2.1.7. Vector cross product.

(7) a × b = T_a b,

where a = (a_1, a_2, a_3)' and b = (b_1, b_2, b_3)' are real vectors, and

T_a = [   0   -a_3   a_2
        a_3     0   -a_1
       -a_2   a_1     0  ].

For (1) through (4) see e.g. Liu (1999). For (3), introduced in the column-wise partitioned case, see Khatri and Rao (1968) and Rao and Rao (1998). For (5) and (6) see Al Zhour and Kilicman (2006b). For (7), see Trenkler (1998, 2001, 2002) and Neudecker and Trenkler (2005, 2006a).

Note that various different names and symbols are used in the literature for the five products mentioned above; e.g. they appear as block Kronecker products in Koning et al. (1991) and Horn and Mathias (1992), in addition to some other matrix products. In particular, the Tracy-Singh product is the same as one of the block Kronecker products, and the Hadamard product is the Schur product. Further, the Khatri-Rao and Tracy-Singh products share some similarities with, but are also quite different from, the Hadamard and Kronecker products, respectively. These connections and differences certainly deserve attention; we refer the reader to e.g. Koning et al. (1991), Horn and Mathias (1992), Wei and Zhang (2000) and Yang (2003, 2005) regarding matrix sizes, partitions and operational properties of these matrix products. In the next subsection, we report some relations connecting the products.

2.2. Relations.

2.2.1. Hadamard and Kronecker products.
It is known that the Hadamard product of two matrices is a principal submatrix of the Kronecker product of the two matrices. This relation can be expressed in an equation as follows.

Lemma 1. For A and C of the same order m × n we have

(8) A ⊙ C = J_1'(A ⊗ C)J_2,

where J_1 is the selection matrix of order m² × m and J_2 is the selection matrix of order n² × n.

For this relation, see e.g. Faliva (1983) and Liu (1999). For J with properties including J'J = I and applications, see e.g. Browne (1974), Pukelsheim (1977), Liu (1995), Liu and Neudecker (1996), Neudecker et al. (1995a, b), Neudecker and Liu (2001a, b) and Schott (1997, p. 267). We next present the relation between the Khatri-Rao and Tracy-Singh products, which extends the above.

2.2.2. Khatri-Rao and Tracy-Singh products. Without loss of generality, we consider here 2 × 2 block matrices

(9) A = [ A_11  A_12       B = [ B_11  B_12
          A_21  A_22 ],          B_21  B_22 ],

where A, A_11, A_12, A_21, A_22, B, B_11, B_12, B_21 and B_22 are m × n, m_1 × n_1, m_1 × n_2, m_2 × n_1, m_2 × n_2, p × q, p_1 × q_1, p_1 × q_2, p_2 × q_1 and p_2 × q_2 (m_1 + m_2 = m, n_1 + n_2 = n, p_1 + p_2 = p and q_1 + q_2 = q) matrices, respectively. Further,

(10) M = [ M_11   M_12       N = [ N_11   N_12
           M_12'  M_22 ],          N_12'  N_22 ],

where M, M_11, M_22, N, N_11 and N_22 are m × m, m_1 × m_1, m_2 × m_2, p × p, p_1 × p_1 and p_2 × p_2 symmetric matrices respectively, and M_12 and N_12 are m_1 × m_2 and p_1 × p_2 matrices respectively. Define the following selection matrices Z_1 of order k_1 × r and Z_2 of order k_2 × s:

(11) Z_1 = [ I_11  0_11  0_21   0        Z_2 = [ I_12  0_12  0_22   0
              0     0     0    I_21 ]',           0     0     0    I_22 ]',

where Z_1'Z_1 = I_1 and Z_2'Z_2 = I_2, with I_11, I_21, I_12, I_22, I_1 and I_2 being m_1p_1 × m_1p_1, m_2p_2 × m_2p_2, n_1q_1 × n_1q_1, n_2q_2 × n_2q_2, r × r and s × s identity matrices (k_1 = mp, k_2 = nq, m = m_1 + m_2, n = n_1 + n_2, p = p_1 + p_2, q = q_1 + q_2, r = m_1p_1 + m_2p_2 and s = n_1q_1 + n_2q_2), and 0_11, 0_21, 0_12 and 0_22 being m_1p_1 × m_1p_2, m_1p_1 × m_2p_1, n_1q_1 × n_1q_2 and n_1q_1 × n_2q_1 matrices of zeros.
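To make the definitions of Section 2 and relation (8) concrete, the products can be sketched in NumPy. This is a minimal illustration, not code from the paper: the helper names (blocks, tracy_singh, khatri_rao, selection_matrix, cross_matrix) and the particular partition sizes are assumptions made for this example.

```python
import numpy as np

def blocks(X, row_cuts, col_cuts):
    """Partition X into a nested list of blocks at the given cut indices."""
    return [np.hsplit(row, col_cuts) for row in np.vsplit(X, row_cuts)]

def tracy_singh(A, B, pA, pB):
    """Tracy-Singh product (4): A ⋈ B = ((A_ij ⊗ B_kl)_kl)_ij."""
    Ab, Bb = blocks(A, *pA), blocks(B, *pB)
    return np.block([[np.block([[np.kron(Aij, Bkl) for Bkl in Brow]
                                for Brow in Bb])
                      for Aij in Arow] for Arow in Ab])

def khatri_rao(A, B, pA, pB):
    """Khatri-Rao product (3): A ∗ B = (A_ij ⊗ B_ij)_ij; A and B must have
    the same number of row blocks and of column blocks."""
    Ab, Bb = blocks(A, *pA), blocks(B, *pB)
    return np.block([[np.kron(Aij, Bij) for Aij, Bij in zip(Ar, Br)]
                     for Ar, Br in zip(Ab, Bb)])

def selection_matrix(n):
    """J of order n^2 x n with columns e_i ⊗ e_i, so that J'J = I_n."""
    J = np.zeros((n * n, n))
    for i in range(n):
        J[i * n + i, i] = 1.0
    return J

def cross_matrix(a):
    """T_a from (7): a × b = T_a b for real 3-vectors."""
    a1, a2, a3 = a
    return np.array([[0.0, -a3, a2],
                     [a3, 0.0, -a1],
                     [-a2, a1, 0.0]])

rng = np.random.default_rng(0)
# m = 3 (m1 = 1, m2 = 2), n = 4 (n1 = n2 = 2); B is 2 x 2 with 1 x 1 blocks.
A = rng.standard_normal((3, 4))
C = rng.standard_normal((3, 4))
B = rng.standard_normal((2, 2))
pA = ([1], [2])   # row/column cut points giving A's 2 x 2 block partition
pB = ([1], [1])   # B partitioned into four 1 x 1 blocks

# (8): the Hadamard product as a submatrix of the Kronecker product.
J1, J2 = selection_matrix(3), selection_matrix(4)
print(np.allclose(J1.T @ np.kron(A, C) @ J2, A * C))

# Orders from (3) and (4): (sum m_i p_i) x (sum n_j q_j) and mp x nq.
print(khatri_rao(A, B, pA, pB).shape, tracy_singh(A, B, pA, pB).shape)

a, b = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
print(np.allclose(cross_matrix(a) @ b, np.cross(a, b)))
```

With trivial (unpartitioned) matrices both block products reduce to the Kronecker product, and with all blocks of order 1 × 1 the Khatri-Rao product reduces to the Hadamard product, mirroring how the two block products generalize the two simple ones.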