MATRIX CALCULUS and KRONECKER PRODUCT - a Practical Approach to Linear and Multilinear Algebra (Second Edition) © World Scientific Publishing Co

Total Pages: 16

File Type: PDF, Size: 1020 KB

Chapter 1
Matrix Calculus

1.1 Definitions and Notation

We assume that the reader is familiar with some basic terms in linear algebra such as vector spaces, linearly dependent vectors, matrix addition and matrix multiplication (Horn and Johnson [30], Laub [39]).

Throughout we consider matrices over the field of complex numbers C or real numbers R. Let z ∈ C with z = x + iy and x, y ∈ R. Then z̄ = x − iy. In some cases we restrict the underlying field to the real numbers R. The matrices are denoted by A, B, C, D, X, Y. The matrix elements (entries) of the matrix A are denoted by a_jk. For the column vectors we write u, v, w. The zero column vector is denoted by 0. Let A be a matrix. Then A^T denotes the transpose and Ā is the complex conjugate matrix. We call A^* the adjoint matrix, where A^* := Ā^T. A special role is played by the n × n matrices, i.e. the square matrices. In this case we also say the matrix is of order n. I_n denotes the n × n unit matrix (also called identity matrix). The zero matrix is denoted by 0.

Let V be a vector space of finite dimension n, over the field R of real numbers, or the field C of complex numbers. If there is no need to distinguish between the two, we speak of the field F of scalars. A basis of V is a set {e_1, e_2, ..., e_n} of n linearly independent vectors of V, denoted by (e_j)_{j=1}^n. Every vector v ∈ V then has the unique representation

v = \sum_{j=1}^{n} v_j e_j,

the scalars v_j, which we will sometimes denote by (v)_j, being the components of the vector v relative to the basis (e_j). As long as a basis is fixed unambiguously, it is thus always possible to identify V with F^n. In matrix notation, the vector v will always be represented by the column vector

v = \begin{pmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{pmatrix},

while v^T and v^* will denote the following row vectors

v^T = (v_1, v_2, ..., v_n), \qquad v^* = (v̄_1, v̄_2, ..., v̄_n),

where ᾱ is the complex conjugate of α. The row vector v^T is the transpose of the column vector v, and the row vector v^* is the conjugate transpose of the column vector v.

Definition 1.1. Let C^n be the familiar n-dimensional vector space. Let u, v ∈ C^n. Then the scalar (or inner) product is defined as

(u, v) := \sum_{j=1}^{n} ū_j v_j.

Obviously (u, v) = \overline{(v, u)} and

(u_1 + u_2, v) = (u_1, v) + (u_2, v).

Since u and v are considered as column vectors, the scalar product can be written in matrix notation as (u, v) ≡ u^* v.

Definition 1.2. Two vectors u, v ∈ C^n are called orthogonal if (u, v) = 0.

Example 1.1. Let

u = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad v = \begin{pmatrix} 1 \\ -1 \end{pmatrix}.

Then (u, v) = 0.

The scalar product induces a norm of u defined by

||u|| := \sqrt{(u, u)}.

In Section 1.14 a detailed discussion of norms is given.

Definition 1.3. A vector u ∈ C^n is called normalized if (u, u) = 1.
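The scalar product, orthogonality, and the induced norm translate directly into NumPy. A minimal sketch (the vectors are arbitrary illustrations, not from the book; note that np.vdot conjugates its first argument, matching the convention (u, v) = u^* v):

    import numpy as np

    u = np.array([1 + 1j, 2 - 1j])
    v = np.array([3j, 1 + 0j])

    # Scalar product (u, v) = sum_j conj(u_j) v_j; np.vdot conjugates
    # its first argument, so it agrees with (u, v) = u* v.
    ip = np.vdot(u, v)
    print(np.isclose(ip, u.conj() @ v))                 # True
    print(np.isclose(np.vdot(v, u), ip.conjugate()))    # (v, u) is the conjugate of (u, v)

    # Induced norm ||u|| = sqrt((u, u)).
    print(np.isclose(np.sqrt(np.vdot(u, u).real), np.linalg.norm(u)))  # True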
Example 1.2. The vectors

u = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \qquad v = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}

are normalized and form an orthonormal basis in the vector space R^2.

Let V and W be two vector spaces over the same field, equipped with bases (e_j)_{j=1}^n and (f_i)_{i=1}^m, respectively. Relative to these bases, a linear transformation A : V → W is represented by the matrix having m rows and n columns

A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}.

The elements a_ij of the matrix A are defined uniquely by the relations

A e_j = \sum_{i=1}^{m} a_{ij} f_i, \qquad j = 1, 2, ..., n.

Equivalently, the jth column vector

\begin{pmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{mj} \end{pmatrix}

of the matrix A represents the vector A e_j relative to the basis (f_i)_{i=1}^m. We call (a_i1 a_i2 ... a_in) the ith row vector of the matrix A. A matrix with m rows and n columns is called a matrix of type (m, n), and the vector space over the field F consisting of matrices of type (m, n) with elements in F is denoted by A_{m,n}.

A column vector is then a matrix of type (m, 1) and a row vector a matrix of type (1, n). A matrix is called real or complex according to whether its elements are in the field R or the field C. A matrix A with elements a_ij is written as A = (a_ij), the first index i always designating the row and the second, j, the column.

Definition 1.4. A matrix with all its elements 0 is called the zero matrix or null matrix.

Definition 1.5. Given a matrix A ∈ A_{m,n}(C), the matrix A^* ∈ A_{n,m}(C) denotes the adjoint of the matrix A and is defined uniquely by the relations

(Au, v)_m = (u, A^* v)_n for every u ∈ C^n, v ∈ C^m,

which imply that (A^*)_ij = ā_ji.

Definition 1.6. Given a matrix A ∈ A_{m,n}(R), the matrix A^T ∈ A_{n,m}(R) denotes the transpose of the matrix A and is defined uniquely by the relations

(Au, v)_m = (u, A^T v)_n for every u ∈ R^n, v ∈ R^m,

which imply that (A^T)_ij = a_ji.

To the composition of linear transformations there corresponds the multiplication of matrices.

Definition 1.7. If A = (a_ij) is a matrix of type (m, l) and B = (b_kj) of type (l, n), their matrix product AB is the matrix of type (m, n) defined by

(AB)_{ij} = \sum_{k=1}^{l} a_{ik} b_{kj}.

We have

(AB)^T = B^T A^T, \qquad (AB)^* = B^* A^*.

Note that AB ≠ BA, in general, where A and B are n × n matrices.

Definition 1.8. A matrix of type (n, n) is said to be square, or a matrix of order n if it is desired to make explicit the integer n; it is convenient to speak of a matrix as rectangular if it is not necessarily square.

Definition 1.9. If A = (a_ij) is a square matrix, the elements a_ii are called diagonal elements, and the elements a_ij, i ≠ j, are called off-diagonal elements.

Definition 1.10. The identity matrix (also called unit matrix) is the square matrix I := (δ_ij).

Definition 1.11. A square matrix A is invertible if there exists a matrix (which is unique, if it does exist), written as A^{-1} and called the inverse of the matrix A, which satisfies

A A^{-1} = A^{-1} A = I.

Otherwise, the matrix is said to be singular.
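The reversal rules (AB)^T = B^T A^T and (AB)^* = B^* A^*, and the non-commutativity of the matrix product, are easy to verify numerically. A short sketch with random complex matrices (illustrative only, not from the book):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

    adj = lambda M: M.conj().T   # adjoint M* = conjugate transpose

    print(np.allclose((A @ B).T, B.T @ A.T))          # (AB)^T = B^T A^T
    print(np.allclose(adj(A @ B), adj(B) @ adj(A)))   # (AB)* = B* A*
    print(np.allclose(A @ B, B @ A))                  # generally False: AB != BA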
Recall that if A and B are invertible matrices then

(AB)^{-1} = B^{-1} A^{-1}, \qquad (A^T)^{-1} = (A^{-1})^T, \qquad (A^*)^{-1} = (A^{-1})^*.

Definition 1.12. A square matrix A is symmetric if A is real and A = A^T. The sum of two symmetric matrices is again a symmetric matrix.

Definition 1.13. A square matrix A is skew-symmetric if A is real and A = −A^T. Every square matrix A over R can be written as the sum of a symmetric matrix S and a skew-symmetric matrix T, i.e. A = S + T, where

S = \frac{1}{2}(A + A^T), \qquad T = \frac{1}{2}(A − A^T).

Definition 1.14. A square matrix A over C is Hermitian if A = A^*. The sum of two Hermitian matrices is again a Hermitian matrix.

Definition 1.15. A square matrix A over C is skew-Hermitian if A = −A^*. The sum of two skew-Hermitian matrices is again a skew-Hermitian matrix.

Definition 1.16. A Hermitian n × n matrix A is positive semidefinite if u^* A u ≥ 0 for all nonzero u ∈ C^n.

Example 1.3. The 2 × 2 matrix

A = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}

is positive semidefinite.

Let B be an arbitrary m × n matrix over C.
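The splitting of Definition 1.13 and the semidefiniteness of Example 1.3 can be checked in a few lines of NumPy. A sketch (the random matrix and the eigenvalue criterion are standard facts, not part of the excerpt):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))

    S = (A + A.T) / 2    # symmetric part
    T = (A - A.T) / 2    # skew-symmetric part
    print(np.allclose(A, S + T), np.allclose(S, S.T), np.allclose(T, -T.T))

    # A Hermitian matrix is positive semidefinite iff all of its eigenvalues
    # are >= 0 (an equivalent criterion to Definition 1.16).
    M = np.array([[1.0, 1.0],
                  [1.0, 1.0]])
    print(np.linalg.eigvalsh(M))   # [0., 2.] -> positive semidefinite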
Recommended publications
  • Block Kronecker Products and the Vecb Operator*
    Block Kronecker Products and the vecb Operator*

    Ruud H. Koning, Department of Economics, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands
    Heinz Neudecker†, Department of Actuarial Sciences and Econometrics, University of Amsterdam, Jodenbreestraat 23, 1011 NH Amsterdam, The Netherlands
    Tom Wansbeek, Department of Economics, University of Groningen, P.O. Box 800, 9700 AV Groningen, The Netherlands

    Submitted by Richard Brualdi

    ABSTRACT. This paper is concerned with two generalizations of the Kronecker product and two related generalizations of the vec operator. It is demonstrated that they pairwise match two different kinds of matrix partition, viz. the balanced and unbalanced ones. Relevant properties are supplied and proved. A related concept, the so-called tilde transform of a balanced block matrix, is also studied. The results are illustrated with various statistical applications of the five concepts studied.

    *Comments of an anonymous referee are gratefully acknowledged.
    †This work was started while the second author was at the Indian Statistical Institute, New Delhi.

    Linear Algebra and its Applications 149:165–184 (1991). © Elsevier Science Publishing Co., Inc., 1991.

    INTRODUCTION. Almost twenty years ago Singh [7] and Tracy and Singh [9] introduced a generalization of the Kronecker product A ⊗ B. They used varying notation for this new product. Recently, Hyland and Collins [1] studied the same product under rather restrictive order conditions.
  • The Kronecker Product: A Product of the Times
    The Kronecker Product: A Product of the Times

    Charles Van Loan, Department of Computer Science, Cornell University. Presented at the SIAM Conference on Applied Linear Algebra, Monterey, California, October 26, 2009.

    The Kronecker product B ⊗ C is a block matrix whose ij-th block is b_ij C. E.g.,

    \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} ⊗ C = \begin{pmatrix} b_{11} C & b_{12} C \\ b_{21} C & b_{22} C \end{pmatrix}.

    Also called the "Direct Product" or the "Tensor Product".

    Every b_ij c_kl shows up:

    \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} ⊗ \begin{pmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ c_{31} & c_{32} & c_{33} \end{pmatrix} =
    \begin{pmatrix}
    b_{11}c_{11} & b_{11}c_{12} & b_{11}c_{13} & b_{12}c_{11} & b_{12}c_{12} & b_{12}c_{13} \\
    b_{11}c_{21} & b_{11}c_{22} & b_{11}c_{23} & b_{12}c_{21} & b_{12}c_{22} & b_{12}c_{23} \\
    b_{11}c_{31} & b_{11}c_{32} & b_{11}c_{33} & b_{12}c_{31} & b_{12}c_{32} & b_{12}c_{33} \\
    b_{21}c_{11} & b_{21}c_{12} & b_{21}c_{13} & b_{22}c_{11} & b_{22}c_{12} & b_{22}c_{13} \\
    b_{21}c_{21} & b_{21}c_{22} & b_{21}c_{23} & b_{22}c_{21} & b_{22}c_{22} & b_{22}c_{23} \\
    b_{21}c_{31} & b_{21}c_{32} & b_{21}c_{33} & b_{22}c_{31} & b_{22}c_{32} & b_{22}c_{33}
    \end{pmatrix}.

    Basic algebraic properties:

    (B ⊗ C)^T = B^T ⊗ C^T
    (B ⊗ C)^{-1} = B^{-1} ⊗ C^{-1}
    (B ⊗ C)(D ⊗ F) = BD ⊗ CF
    B ⊗ (C ⊗ D) = (B ⊗ C) ⊗ D
    C ⊗ B = (Perfect Shuffle)^T (B ⊗ C) (Perfect Shuffle)

    R.A. Horn and C.R. Johnson (1991). Topics in Matrix Analysis, Cambridge University Press, NY.

    Reshaping KP computations: suppose B, C ∈ R^{n×n} and x ∈ R^{n^2}. The operation y = (B ⊗ C)x is O(n^4):

        y = kron(B,C)*x

    The equivalent, reshaped operation Y = C X B^T is O(n^3):

        y = reshape(C*reshape(x,n,n)*B',n,n)
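    The same reshape trick carries over to NumPy, with the caveat that MATLAB's reshape is column-major, so order='F' is needed. A sketch mirroring the MATLAB one-liners above (sizes and random data are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4
        B = rng.standard_normal((n, n))
        C = rng.standard_normal((n, n))
        x = rng.standard_normal(n * n)

        y_slow = np.kron(B, C) @ x                  # O(n^4): forms the n^2 x n^2 matrix
        X = x.reshape((n, n), order='F')            # column-major, like MATLAB reshape
        y_fast = (C @ X @ B.T).flatten(order='F')   # O(n^3): never forms kron(B, C)
        print(np.allclose(y_slow, y_fast))          # True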
  • The Partition Algebra and the Kronecker Product (Extended Abstract)
    FPSAC 2013, Paris, France. DMTCS proc. (subm.), by the authors, 1–12

    The Partition Algebra and the Kronecker Product (Extended Abstract)

    C. Bowman (Institut de Mathématiques de Jussieu, 175 rue du Chevaleret, 75013 Paris), M. De Visscher (Centre for Mathematical Science, City University London, Northampton Square, London, EC1V 0HB, England), and R. Orellana (Department of Mathematics, Dartmouth College, 6188 Kemeny Hall, Hanover, NH 03755, USA)

    Abstract. We propose a new approach to study the Kronecker coefficients by using the Schur–Weyl duality between the symmetric group and the partition algebra.

    Résumé (translated from French). We propose a new approach to the study of Kronecker coefficients via the duality between the symmetric group and the partition algebra.

    Keywords: Kronecker coefficients, tensor product, partition algebra, representations of the symmetric group

    1 Introduction. A fundamental problem in the representation theory of the symmetric group is to describe the coefficients in the decomposition of the tensor product of two Specht modules. These coefficients are known in the literature as the Kronecker coefficients. Finding a formula or combinatorial interpretation for these coefficients has been described by Richard Stanley as 'one of the main problems in the combinatorial representation theory of the symmetric group'. This question has received the attention of Littlewood [Lit58], James [JK81, Chapter 2.9], Lascoux [Las80], Thibon [Thi91], Garsia and Remmel [GR85], Kleshchev and Bessenrodt [BK99] amongst others, and yet a combinatorial solution has remained beyond reach for over a hundred years. Murnaghan discovered an amazing limiting phenomenon satisfied by the Kronecker coefficients; as we increase the length of the first row of the indexing partitions, the sequence of Kronecker coefficients obtained stabilises.
  • Kronecker Products
    Copyright © 2005 by the Society for Industrial and Applied Mathematics. This electronic version is for personal use and may not be duplicated or distributed. From "Matrix Analysis for Scientists and Engineers", Alan J. Laub. Buy this book from SIAM at www.ec-securehost.com/SIAM/ot91.html

    Chapter 13: Kronecker Products

    13.1 Definition and Examples

    Definition 13.1. Let A ∈ R^{m×n}, B ∈ R^{p×q}. Then the Kronecker product (or tensor product) of A and B is defined as the matrix

    A ⊗ B = \begin{pmatrix} a_{11}B & \cdots & a_{1n}B \\ \vdots & \ddots & \vdots \\ a_{m1}B & \cdots & a_{mn}B \end{pmatrix} ∈ R^{mp×nq}.   (13.1)

    Obviously, the same definition holds if A and B are complex-valued matrices. We restrict our attention in this chapter primarily to real-valued matrices, pointing out the extension to the complex case only where it is not obvious.

    Example 13.2.

    1. Let

    A = \begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 1 \\ 2 & 3 \end{pmatrix}.

    Then

    A ⊗ B = \begin{pmatrix} B & 2B & 3B \\ 3B & 2B & B \end{pmatrix} = \begin{pmatrix} 2 & 1 & 4 & 2 & 6 & 3 \\ 2 & 3 & 4 & 6 & 6 & 9 \\ 6 & 3 & 4 & 2 & 2 & 1 \\ 6 & 9 & 4 & 6 & 2 & 3 \end{pmatrix}.

    Note that B ⊗ A ≠ A ⊗ B.

    2. For any B ∈ R^{p×q}, I_2 ⊗ B = \begin{pmatrix} B & 0 \\ 0 & B \end{pmatrix}. Replacing I_2 by I_n yields a block diagonal matrix with n copies of B along the diagonal.

    3. Let B be an arbitrary 2 × 2 matrix. Then

    B ⊗ I_2 = \begin{pmatrix} b_{11} & 0 & b_{12} & 0 \\ 0 & b_{11} & 0 & b_{12} \\ b_{21} & 0 & b_{22} & 0 \\ 0 & b_{21} & 0 & b_{22} \end{pmatrix}.

    The extension to arbitrary B and I_n is obvious.
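    These examples can be reproduced directly with np.kron; a quick check of Example 13.2 (my own sketch, not part of the chapter):

        import numpy as np

        A = np.array([[1, 2, 3],
                      [3, 2, 1]])
        B = np.array([[2, 1],
                      [2, 3]])

        print(np.kron(A, B))                                 # the 4 x 6 matrix above
        print(np.array_equal(np.kron(A, B), np.kron(B, A)))  # False: kron is not commutative
        print(np.kron(np.eye(2, dtype=int), B))              # block diagonal diag(B, B)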
  • A Constructive Arbitrary-Degree Kronecker Product Decomposition of Tensors
    A Constructive Arbitrary-Degree Kronecker Product Decomposition of Tensors

    Kim Batselier and Ngai Wong

    Abstract. We propose the tensor Kronecker product singular value decomposition (TKPSVD) that decomposes a real k-way tensor A into a linear combination of tensor Kronecker products with an arbitrary number d of factors,

    A = \sum_{j=1}^{R} \sigma_j A_j^{(d)} ⊗ \cdots ⊗ A_j^{(1)}.

    We generalize the matrix Kronecker product to tensors such that each factor A_j^{(i)} in the TKPSVD is a k-way tensor. The algorithm relies on reshaping and permuting the original tensor into a d-way tensor, after which a polyadic decomposition with orthogonal rank-1 terms is computed. We prove that for many different structured tensors, the Kronecker product factors A_j^{(1)}, ..., A_j^{(d)} are guaranteed to inherit this structure. In addition, we introduce the new notion of general symmetric tensors, which includes many different structures such as symmetric, persymmetric, centrosymmetric, Toeplitz and Hankel tensors.

    Key words. Kronecker product, structured tensors, tensor decomposition, TTr1SVD, generalized symmetric tensors, Toeplitz tensor, Hankel tensor

    AMS subject classifications. 15A69, 15B05, 15A18, 15A23, 15B57

    1. Introduction. Consider the singular value decomposition (SVD) of the following 16 × 9 matrix Ã: [the numerical 16 × 9 matrix is truncated in this excerpt].
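    The TKPSVD algorithm itself is not reproduced here, but its key ingredient, reshaping so that Kronecker structure becomes rank-1 structure, is easy to show in the simplest case: a matrix that is exactly B ⊗ C. This is the classic Van Loan–Pitsianis rearrangement; a sketch under the assumption that the block sizes (m, n, p, q) are known:

        import numpy as np

        def rearrange(A, m, n, p, q):
            # Stack each p x q block of the mp x nq matrix A as one row.
            # For A = kron(B, C) the result equals outer(vec(B), vec(C)),
            # with row-major vec, hence has rank 1.
            return A.reshape(m, p, n, q).transpose(0, 2, 1, 3).reshape(m * n, p * q)

        rng = np.random.default_rng(0)
        B = rng.standard_normal((2, 3))
        C = rng.standard_normal((4, 5))
        A = np.kron(B, C)

        R = rearrange(A, 2, 3, 4, 5)        # rank 1 because A is a pure Kronecker product
        U, s, Vt = np.linalg.svd(R)
        B_hat = np.sqrt(s[0]) * U[:, 0].reshape(2, 3)
        C_hat = np.sqrt(s[0]) * Vt[0].reshape(4, 5)
        print(np.allclose(np.kron(B_hat, C_hat), A))   # True: sign ambiguity cancels in the product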
  • Introduction to Kronecker Products
    Math 515, Fall 2010

    Introduction to Kronecker Products

    If A is an m × n matrix and B is a p × q matrix, then the Kronecker product of A and B is the mp × nq matrix

    A ⊗ B = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1n}B \\ a_{21}B & a_{22}B & \cdots & a_{2n}B \\ \vdots & \vdots & & \vdots \\ a_{m1}B & a_{m2}B & \cdots & a_{mn}B \end{pmatrix}.

    Note that if A and B are large matrices, then the Kronecker product A ⊗ B will be huge. MATLAB has a built-in function kron that can be used as

        K = kron(A, B);

    However, you will quickly run out of memory if you try this for matrices that are 50 × 50 or larger. Fortunately we can exploit the block structure of Kronecker products to do many computations involving A ⊗ B without actually forming the Kronecker product. Instead we need only do computations with A and B individually. To begin, we state some very simple properties of Kronecker products, which should not be difficult to verify:

    • (A ⊗ B)^T = A^T ⊗ B^T
    • If A and B are square and nonsingular, then (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}
    • (A ⊗ B)(C ⊗ D) = AC ⊗ BD

    The next property we want to consider involves the matrix-vector multiplication

    y = (A ⊗ B)x,

    where A ∈ R^{m×n} and B ∈ R^{p×q}. Thus A ⊗ B ∈ R^{mp×nq}, x ∈ R^{nq}, and y ∈ R^{mp}. Our goal is to exploit the block structure of the Kronecker product matrix to compute y without explicitly forming (A ⊗ B). To see how this can be done, first partition the vectors x and y into blocks, x_1, ..., x_n ∈ R^q and y_1, ..., y_m ∈ R^p.
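    The excerpt breaks off here, but the block computation it is heading toward is standard: y_i = \sum_j a_{ij} B x_j, i.e. Y = B X A^T with X = [x_1 ... x_n]. A NumPy sketch under that assumption (rectangular sizes chosen arbitrarily):

        import numpy as np

        rng = np.random.default_rng(1)
        m, n, p, q = 3, 4, 5, 2
        A = rng.standard_normal((m, n))
        B = rng.standard_normal((p, q))
        x = rng.standard_normal(n * q)

        # Column j of X is the block x_j; a column-major reshape stacks the blocks.
        X = x.reshape((q, n), order='F')
        y_fast = (B @ X @ A.T).flatten(order='F')   # never forms the mp x nq matrix
        y_slow = np.kron(A, B) @ x
        print(np.allclose(y_fast, y_slow))          # True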
  • Block Matrices in Linear Algebra
    PRIMUS: Problems, Resources, and Issues in Mathematics Undergraduate Studies. ISSN: 1051-1970 (Print), 1935-4053 (Online). Journal homepage: https://www.tandfonline.com/loi/upri20

    Block Matrices in Linear Algebra

    Stephan Ramon Garcia & Roger A. Horn

    To cite this article: Stephan Ramon Garcia & Roger A. Horn (2020) Block Matrices in Linear Algebra, PRIMUS, 30:3, 285-306, DOI: 10.1080/10511970.2019.1567214

    Abstract: Linear algebra is best done with block matrices. As evidence in support of this thesis, we present numerous examples suitable for classroom presentation.

    Keywords: Matrix, matrix multiplication, block matrix, Kronecker product, rank, eigenvalues

    1. INTRODUCTION. This paper is addressed to instructors of a first course in linear algebra, who need not be specialists in the field. We aim to convince the reader that linear algebra is best done with block matrices. In particular, flexible thinking about the process of matrix multiplication can reveal concise proofs of important theorems and expose new results. Viewing linear algebra from a block-matrix perspective gives an instructor access to useful techniques, exercises, and examples.
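    A concrete instance of that block-matrix viewpoint: multiplying partitioned matrices block-by-block, exactly as if the blocks were scalars. A short NumPy check (illustrative only, not from the article):

        import numpy as np

        rng = np.random.default_rng(2)
        A11, A12, A21, A22 = (rng.standard_normal((2, 2)) for _ in range(4))
        B11, B12, B21, B22 = (rng.standard_normal((2, 2)) for _ in range(4))

        A = np.block([[A11, A12], [A21, A22]])
        B = np.block([[B11, B12], [B21, B22]])

        # Block-wise product: treat the 2 x 2 blocks like scalar entries.
        C = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
                      [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
        print(np.allclose(C, A @ B))   # True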
  • Kronecker Products and Matrix Calculus in System Theory
    IEEE Transactions on Circuits and Systems, Vol. CAS-25, No. 9, September 1978, p. 772

    Kronecker Products and Matrix Calculus in System Theory

    John W. Brewer

    Abstract—The paper begins with a review of the algebras related to Kronecker products. These algebras have several applications in system theory including the analysis of stochastic steady state. The calculus of matrix valued functions of matrices is reviewed in the second part of the paper. This calculus is then used to develop an interesting new method for the identification of parameters of linear time-invariant system models.

    I. INTRODUCTION

    The art of differentiation has many practical applications in system analysis. Differentiation is used to obtain necessary and sufficient conditions for optimization in analytical studies or it is used to obtain gradients and Hessians in numerical optimization studies. Sensitivity analysis is yet another application area for differentiation formulas. In turn, sensitivity analysis is applied to the design of feedback and adaptive feedback control systems. ... and are based on the developments of Section IV. Also, a novel and interesting operator notation will be introduced in Section VI. Concluding comments are placed in Section VII.

    II. NOTATION

    Matrices will be denoted by upper case boldface (e.g., A) and column matrices (vectors) will be denoted by lower case boldface (e.g., x). The kth row of a matrix such as A will be denoted A_k· and the kth column will be denoted A_·k. The jk element of A will be denoted a_jk. The n × n unit matrix is denoted I_n. The q-dimensional vector which is 1 in the kth position and zero elsewhere is called the unit vector and is denoted
  • Tensor Spaces – The Basics
    Tensor spaces – the basics

    S. Gill Williamson

    Abstract. We present the basic concepts of tensor products of vector spaces, exploiting the special properties of vector spaces as opposed to more general modules. Introduction (1), Basic multilinear algebra (2), Tensor products of vector spaces (3), Tensor products of matrices (4), Inner products on tensor spaces (5), Direct sums and tensor products (6), Background concepts and notation (7), Discussion and acknowledgements (8). This material draws upon [Mar73] and relates to subsequent work [Mar75].

    arXiv:1510.02428v1 [math.AC] 8 Oct 2015

    1. Introduction

    We start with an example. Let

    x = \begin{pmatrix} a \\ b \end{pmatrix} ∈ M_{2,1} and y = (c d) ∈ M_{1,2},

    where M_{m,n} denotes the m × n matrices over the real numbers, R. The set of matrices M_{m,n} forms a vector space under matrix addition and multiplication by real numbers. The dimension of this vector space is mn. We write dim(M_{m,n}) = mn.

    Define a function

    ν(x, y) = xy = \begin{pmatrix} ac & ad \\ bc & bd \end{pmatrix} ∈ M_{2,2}

    (the matrix product of x and y). The function ν has domain V_1 × V_2 where V_1 = M_{2,1} and V_2 = M_{1,2} are vector spaces of dimension dim(V_i) = 2, i = 1, 2. The range of ν is the vector space P = M_{2,2}, which has dim(P) = 4. The function ν is bilinear in the following sense:

    (1.1) ν(r_1 x_1 + r_2 x_2, y) = r_1 ν(x_1, y) + r_2 ν(x_2, y)
    (1.2) ν(x, r_1 y_1 + r_2 y_2) = r_1 ν(x, y_1) + r_2 ν(x, y_2)

    for any r_1, r_2 ∈ R, x, x_1, x_2 ∈ V_1, and y, y_1, y_2 ∈ V_2. We denote the set of all such bilinear functions by M(V_1, V_2 : P).
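    The bilinearity conditions (1.1) and (1.2) can be spot-checked numerically; a tiny sketch (the numbers are arbitrary illustrations):

        import numpy as np

        def nu(x, y):
            # x is 2x1, y is 1x2; nu(x, y) = xy is their 2x2 matrix product.
            return x @ y

        x1 = np.array([[1.0], [2.0]])
        x2 = np.array([[0.5], [-1.0]])
        y1 = np.array([[3.0, 4.0]])
        y2 = np.array([[-2.0, 1.0]])
        r1, r2 = 2.0, -3.0

        # (1.1): linearity in the first argument.
        print(np.allclose(nu(r1*x1 + r2*x2, y1), r1*nu(x1, y1) + r2*nu(x2, y1)))
        # (1.2): linearity in the second argument.
        print(np.allclose(nu(x1, r1*y1 + r2*y2), r1*nu(x1, y1) + r2*nu(x1, y2)))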
  • Matrices, Jordan Normal Forms, and Spectral Radius Theory
    Matrices, Jordan Normal Forms, and Spectral Radius Theory

    René Thiemann and Akihisa Yamada

    February 23, 2021

    Abstract. Matrix interpretations are useful as measure functions in termination proving. In order to use these interpretations also for complexity analysis, the growth rate of matrix powers has to be examined. Here, we formalized an important result of spectral radius theory, namely that the growth rate is polynomially bounded if and only if the spectral radius of a matrix is at most one.

    To formally prove this result we first studied the growth rates of matrices in Jordan normal form, and prove the result that every complex matrix has a Jordan normal form by means of two algorithms: we first convert matrices into similar ones via Schur decomposition, and then apply a second algorithm which converts an upper-triangular matrix into Jordan normal form. We further showed uniqueness of Jordan normal forms which then gives rise to a modular algorithm to compute individual blocks of a Jordan normal form.

    The whole development is based on a new abstract type for matrices, which is also executable by a suitable setup of the code generator. It completely subsumes our former AFP-entry on executable matrices [6], and its main advantage is its close connection to the HMA-representation which allowed us to easily adapt existing proofs on determinants. All the results have been applied to improve CeTA [7, 1], our certifier to validate termination and complexity proof certificates.

    Contents: 1 Introduction; 2 Missing Ring; 3 Missing Permutations (3.1 Instantiations); ...
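    The headline result, polynomial growth of ||A^k|| exactly when the spectral radius is at most one, is easy to observe numerically. A small sketch (my own illustration, not from the AFP entry):

        import numpy as np

        J = np.array([[1.0, 1.0],
                      [0.0, 1.0]])    # Jordan block: spectral radius 1
        A = 1.1 * np.eye(2)           # spectral radius 1.1

        for k in (10, 100, 1000):
            print(k,
                  np.linalg.norm(np.linalg.matrix_power(J, k)),   # grows like k (polynomial)
                  np.linalg.norm(np.linalg.matrix_power(A, k)))   # grows like 1.1**k (exponential)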
  • Dimensions of Nilpotent Algebras
    DIMENSIONS OF NILPOTENT ALGEBRAS

    A Major Qualifying Project Report submitted to the Faculty of Worcester Polytechnic Institute in partial fulfillment of the requirements for the Degree of Bachelor of Science. Written by Kwabena A. Adwetewa-Badu. March 14, 2021. This report represents the work of WPI undergraduate students submitted to the faculty as evidence of completion of a degree requirement.

    Abstract. In this thesis, we study algebras of nilpotent matrices. In the first chapter, we give complete proofs of some well-known results on the structure of a linear operator. In particular, we prove that every nilpotent linear transformation can be represented by a strictly upper triangular matrix. In Chapter 2, we generalise this result to algebras of nilpotent matrices. We prove the famous Lie-Kolchin theorem, which states that an algebra is nilpotent if and only if it is conjugate to an algebra of strictly upper triangular matrices. In Chapter 3 we introduce a family of algebras constructed from graphs and directed graphs. We characterise the directed graphs which give nilpotent algebras. To be precise, a digraph generates an algebra of nilpotency class d if and only if it is acyclic with no paths of length ≥ d. Finally, in Chapter 4, we give Jacobson's proof of a theorem of Schur on the maximal dimension of a subalgebra of M_n(k) of nilpotency class 2. We relate this result to problems in extremal graph theory, and give lower bounds on the dimension of subalgebras of nilpotency class d in M_n(k) for every pair of integers d and n.
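    The digraph criterion from Chapter 3 has a neat matrix form: if N is the adjacency matrix of a digraph, then (N^k)_ij counts directed paths of length k, so "acyclic with no paths of length ≥ d" means exactly N^d = 0. A quick sketch (my example graph, not from the thesis):

        import numpy as np

        # Path digraph 1 -> 2 -> 3 -> 4: acyclic, longest path has 3 edges.
        N = np.array([[0, 1, 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1],
                      [0, 0, 0, 0]])

        # (N^k)_{ij} counts directed paths of length k from vertex i to vertex j.
        print(np.linalg.matrix_power(N, 3))   # one path of length 3 survives
        print(np.linalg.matrix_power(N, 4))   # the zero matrix: N^4 = 0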
  • Introduction to Groups, Rings and Fields
    Introduction to Groups, Rings and Fields

    HT and TT 2011. H. A. Priestley

    0. Familiar algebraic systems: review and a look ahead.

    GRF is an ALGEBRA course, and specifically a course about algebraic structures. This introductory section revisits ideas met in the early part of Analysis I and in Linear Algebra I, to set the scene and provide motivation.

    0.1 Familiar number systems

    Consider the traditional number systems

    N = {0, 1, 2, ...}, the natural numbers;
    Z = {m − n | m, n ∈ N}, the integers;
    Q = {m/n | m, n ∈ Z, n ≠ 0}, the rational numbers;
    R, the real numbers;
    C, the complex numbers;

    for which we have N ⊂ Z ⊂ Q ⊂ R ⊂ C. These come equipped with the familiar arithmetic operations of sum and product.

    The real numbers: Analysis I built on the real numbers. Right at the start of that course you were given a set of assumptions about R, falling under three headings: (1) algebraic properties (laws of arithmetic), (2) order properties, (3) Completeness Axiom; summarised as saying the real numbers form a complete ordered field.

    (1) The algebraic properties of R. You were told in Analysis I:

    Addition: for each pair of real numbers a and b there exists a unique real number a + b such that
    • + is a commutative and associative operation;
    • there exists in R a zero, 0, for addition: a + 0 = 0 + a = a for all a ∈ R;
    • for each a ∈ R there exists an additive inverse −a ∈ R such that a + (−a) = (−a) + a = 0.

    Multiplication: for each pair of real numbers a and b there exists a unique real number a · b such that
    • · is a commutative and associative operation;
    • there exists in R an identity, 1, for multiplication: a · 1 = 1 · a = a for all a ∈ R;
    • for each a ∈ R with a ≠ 0 there exists a multiplicative inverse a^{−1} ∈ R such that a · a^{−1} = a^{−1} · a = 1.

    Addition and multiplication together: for all a, b, c ∈ R, we have the distributive law a · (b + c) = a · b + a · c.