An Extension of the Sherman-Morrison-Woodbury Formula ∗


An extension of the Sherman-Morrison-Woodbury formula ∗

Yan Zi-zong †

∗ Supported by the National Natural Science Foundation of China (70771080).
† Department of Information and Mathematics, Yangtze University, Jingzhou, Hubei, China ([email protected]).

Abstract

This paper is focused on the applications of Schur complements to matrix identities and presents an extension of the Sherman-Morrison-Woodbury formula, which includes a number of matrix identities, such as Hua's identity and its extensions.

Keywords: Sherman-Morrison-Woodbury formula, Hua's identity, Schur complement

AMS subject classifications: 15A45, 15A48, 15A24

1 Introduction

The well-known matrix identity

    (A + UV^*)^{-1} = A^{-1} - A^{-1}U(I + V^*A^{-1}U)^{-1}V^*A^{-1}        (1)

is called the Sherman-Morrison-Woodbury formula and is usually attributed to Sherman and Morrison [8] and to Woodbury [11] independently. In (1), A ∈ C^{k×k}, U, V ∈ C^{k×p}, I is an identity matrix, and both A and I + V^*A^{-1}U are nonsingular. In mathematics (specifically linear algebra), this identity says that the inverse of a low-rank correction of some matrix can be computed by applying a low-rank correction to the inverse of the original matrix. Alternative names for this formula are the matrix inversion lemma, the Sherman-Morrison formula (when U and V are vectors), or simply the Woodbury formula.

There are numerous applications of the Sherman-Morrison-Woodbury formula in various fields [4, 10]. For example, the formula is useful in certain numerical computations where A^{-1} has already been computed and it is desired to compute (A + UV^*)^{-1}. With the inverse of A available, it is only necessary to find the inverse of I + V^*A^{-1}U in order to obtain the result from the right-hand side of the identity. Since the inverse of I + V^*A^{-1}U is easily computed, this is more efficient than inverting A + UV^* directly.

The Sherman-Morrison-Woodbury formula (1) implies that

    (I - A^*A)^{-1} = I + A^*(I - AA^*)^{-1}A.        (2)

By the use of the formula (2), Loo-Keng Hua [5] proposed the elegant matrix identity

    I - B^*B = (I - B^*A)(I - A^*A)^{-1}(I - A^*B) - (A - B)^*(I - AA^*)^{-1}(A - B).        (3)

A short proof of the formula (3) can be found in [14]. Meanwhile, Zhang [13, 14] also presented a nice generalization of Hua's matrix identity (3) as follows:

    AA^* + BB^* = (B + AX)(I + X^*X)^{-1}(B + AX)^* + (A - BX^*)(I + XX^*)^{-1}(A - BX^*)^*.        (4)

Recently, Yan [12] presented another extension of (3) as follows:

    AA^* - BB^* = (A - BX^*)(I - XX^*)^{-1}(A - BX^*)^* - (B - AX)(I - X^*X)^{-1}(B - AX)^*.        (5)

Our purpose in this paper is to present an extension of the Sherman-Morrison-Woodbury formula together with a number of useful matrix identities, including Hua's identity, both by means of Schur complements; we do this in Sections 3 and 4, after presenting the necessary background theory in Section 2.

2 Background

Let M be an n × n invertible matrix partitioned as

    M = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix},        (6)

in which M_{11} is a square k × k block with 1 ≤ k < n. Letting

    M_{22.1} = M_{22} - M_{21}M_{11}^{-1}M_{12}

denote the Schur complement of M_{11} in M, the Banachiewicz identity in [2] is

    M^{-1} = \begin{pmatrix} M_{11}^{-1} + M_{11}^{-1}M_{12}M_{22.1}^{-1}M_{21}M_{11}^{-1} & -M_{11}^{-1}M_{12}M_{22.1}^{-1} \\ -M_{22.1}^{-1}M_{21}M_{11}^{-1} & M_{22.1}^{-1} \end{pmatrix},        (7)

which can be derived from the following so-called Aitken block-diagonalization formula

    \begin{pmatrix} I & 0 \\ -M_{21}M_{11}^{-1} & I \end{pmatrix} \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} \begin{pmatrix} I & -M_{11}^{-1}M_{12} \\ 0 & I \end{pmatrix} = \begin{pmatrix} M_{11} & 0 \\ 0 & M_{22.1} \end{pmatrix}.        (8)
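As a quick numerical sanity check (an illustration added here, not part of the original paper), the following NumPy sketch verifies the Sherman-Morrison-Woodbury formula (1) and the Banachiewicz inversion formula (7) on randomly generated matrices; the sizes k = 5, p = 3 and n = 8 are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(0)
    k, p, n = 5, 3, 8

    # Sherman-Morrison-Woodbury formula (1):
    # (A + U V*)^{-1} = A^{-1} - A^{-1} U (I + V* A^{-1} U)^{-1} V* A^{-1}
    A = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))
    U = rng.standard_normal((k, p)) + 1j * rng.standard_normal((k, p))
    V = rng.standard_normal((k, p)) + 1j * rng.standard_normal((k, p))
    Ainv = np.linalg.inv(A)
    lhs = np.linalg.inv(A + U @ V.conj().T)
    rhs = Ainv - Ainv @ U @ np.linalg.inv(np.eye(p) + V.conj().T @ Ainv @ U) @ V.conj().T @ Ainv
    assert np.allclose(lhs, rhs)

    # Banachiewicz identity (7): blockwise inverse of M via the Schur complement M_{22.1}.
    M = rng.standard_normal((n, n))
    M11, M12, M21, M22 = M[:k, :k], M[:k, k:], M[k:, :k], M[k:, k:]
    M11inv = np.linalg.inv(M11)
    S = M22 - M21 @ M11inv @ M12          # Schur complement of M11 in M
    Sinv = np.linalg.inv(S)
    Minv = np.block([
        [M11inv + M11inv @ M12 @ Sinv @ M21 @ M11inv, -M11inv @ M12 @ Sinv],
        [-Sinv @ M21 @ M11inv,                         Sinv],
    ])
    assert np.allclose(Minv, np.linalg.inv(M))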
The formula (8) was apparently first established explicitly by Aitken [1] and first published in 1939. When M_{22} is an identity matrix, the Sherman-Morrison-Woodbury formula (1) is a special and important case of the following Duncan identity

    (M_{11} - M_{12}M_{22}^{-1}M_{21})^{-1} = M_{11}^{-1} + M_{11}^{-1}M_{12}M_{22.1}^{-1}M_{21}M_{11}^{-1},        (9)

established by Duncan (1942) in [3]. It follows at once from the Banachiewicz identity (7).

The Duncan identity (9) and the Sherman-Morrison-Woodbury formula (1) are essentially equivalent. In fact, we can acquire the Duncan identity (9) if we replace U and V^* in (1) by M_{22}^{-1}UM_{22}^{-1} and M_{22}^{-1}V^*M_{22}^{-1}, respectively. These well-known matrix identities can be found in, for example, [3, 6, 9, 13].

The following lemma, which can be found in [13, 14], is interesting; here we still present a complete proof.

Lemma 2.1. Let M be a partitioned matrix defined as in (6), let

    L = \begin{pmatrix} L_{11} & 0 \\ L_{21} & L_{22} \end{pmatrix},    R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix}

be partitioned conformally with M, and let R(·) denote the column space. Suppose that the blocks L_{11} and R_{11} are invertible. If

    R(M_{12}) ⊂ R(M_{11}),        (10)

then

    (LM)_{22.1} = L_{22}M_{22.1},        (11)
    (MR)_{22.1} = M_{22.1}R_{22},        (12)
    (LMR)_{22.1} = L_{22}M_{22.1}R_{22}.        (13)

In particular, if L_{22} = R_{22} = I, then

    (LMR)_{22.1} = M_{22.1}.        (14)

Proof. Under the assumption L_{22} = R_{22} = I, it is obvious that (14) is valid if (13) is true; moreover, (12) is proved in the same way as (11), and (13) follows by combining (11) and (12). So we only need to prove the result (11).

Firstly we assume that M_{11} is invertible. Since

    LM = \begin{pmatrix} L_{11}M_{11} & L_{11}M_{12} \\ L_{21}M_{11} + L_{22}M_{21} & L_{21}M_{12} + L_{22}M_{22} \end{pmatrix},

then

    (LM)_{22.1} = L_{21}M_{12} + L_{22}M_{22} - (L_{21}M_{11} + L_{22}M_{21})(L_{11}M_{11})^{-1}L_{11}M_{12}
                = L_{21}M_{12} + L_{22}M_{22} - (L_{21}M_{11} + L_{22}M_{21})M_{11}^{-1}M_{12}
                = L_{22}M_{22} - L_{22}M_{21}M_{11}^{-1}M_{12}
                = L_{22}M_{22.1}.

If M_{11} is singular, the condition (10) implies that the Schur complement M_{22.1} of M_{11} in M is unique (see [14]), and R(L_{11}M_{12}) ⊂ R(L_{11}M_{11}), which shows that the Schur complement (LM)_{22.1} of L_{11}M_{11} in LM is unique. So (11) is still valid.

3 Main results

The main result of this paper is stated as follows.

Theorem 3.1. Let N be an n × n matrix partitioned conformally with M in (6). If the blocks M_{11}, N_{11} and M_{11}N_{11} + M_{12}N_{21} are invertible, then

    M_{21}N_{12} + M_{22}N_{22} = (M_{21}N_{11} + M_{22}N_{21})(M_{11}N_{11} + M_{12}N_{21})^{-1}(M_{11}N_{12} + M_{12}N_{22}) + M_{22.1}(I + N_{21}N_{11}^{-1}M_{11}^{-1}M_{12})^{-1}N_{22.1}.        (15)

Proof. Letting

    P = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix} \begin{pmatrix} N_{11} & N_{12} \\ N_{21} & N_{22} \end{pmatrix},    Q = \begin{pmatrix} I & 0 \\ -M_{21}M_{11}^{-1} & I \end{pmatrix} P \begin{pmatrix} I & -N_{11}^{-1}N_{12} \\ 0 & I \end{pmatrix},

then

    Q = \begin{pmatrix} M_{11} & M_{12} \\ 0 & M_{22.1} \end{pmatrix} \begin{pmatrix} N_{11} & 0 \\ N_{21} & N_{22.1} \end{pmatrix} = \begin{pmatrix} M_{11}N_{11} + M_{12}N_{21} & M_{12}N_{22.1} \\ M_{22.1}N_{21} & M_{22.1}N_{22.1} \end{pmatrix},

and

    P_{22.1} = M_{21}N_{12} + M_{22}N_{22} - (M_{21}N_{11} + M_{22}N_{21})(M_{11}N_{11} + M_{12}N_{21})^{-1}(M_{11}N_{12} + M_{12}N_{22}),
    Q_{22.1} = M_{22.1}N_{22.1} - M_{22.1}N_{21}(M_{11}N_{11} + M_{12}N_{21})^{-1}M_{12}N_{22.1}.

On the other hand, the Sherman-Morrison-Woodbury formula (1) implies

    (M_{11}N_{11} + M_{12}N_{21})^{-1} = N_{11}^{-1}M_{11}^{-1} - N_{11}^{-1}M_{11}^{-1}M_{12}(I + N_{21}N_{11}^{-1}M_{11}^{-1}M_{12})^{-1}N_{21}N_{11}^{-1}M_{11}^{-1}.

Let E = N_{21}N_{11}^{-1}M_{11}^{-1}M_{12}. By the use of the basic relation E(I + E)^{-1}E - E = (I + E)^{-1} - I, we have

    Q_{22.1} = M_{22.1}N_{22.1} - M_{22.1}N_{21}N_{11}^{-1}M_{11}^{-1}M_{12}N_{22.1} + M_{22.1}N_{21}N_{11}^{-1}M_{11}^{-1}M_{12}(I + N_{21}N_{11}^{-1}M_{11}^{-1}M_{12})^{-1}N_{21}N_{11}^{-1}M_{11}^{-1}M_{12}N_{22.1}
             = M_{22.1}(I + N_{21}N_{11}^{-1}M_{11}^{-1}M_{12})^{-1}N_{22.1}.

From Lemma 2.1 (applied with L_{22} = R_{22} = I), P_{22.1} = Q_{22.1}, which implies the desired result.

The matrix identity (15) and the Sherman-Morrison-Woodbury formula (1) are essentially equivalent. The above proof shows that the latter implies the former. Conversely, if we choose

    P = \begin{pmatrix} A & U \\ 0 & I \end{pmatrix} \begin{pmatrix} I & 0 \\ V^* & I \end{pmatrix}

in the matrix identity (15), we can acquire the Sherman-Morrison-Woodbury formula (1).
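The identity (15) can also be confirmed numerically. The following NumPy sketch (added for illustration; it is not part of the paper) builds random partitioned matrices M and N, which satisfy the invertibility assumptions of Theorem 3.1 with probability one, and checks (15); the block size k = 3 and dimension n = 7 are arbitrary choices.

    import numpy as np

    rng = np.random.default_rng(1)
    k, n = 3, 7
    inv = np.linalg.inv

    def blocks(T):
        # Split an n x n matrix into the 2 x 2 block form of (6), with a k x k leading block.
        return T[:k, :k], T[:k, k:], T[k:, :k], T[k:, k:]

    M = rng.standard_normal((n, n))
    N = rng.standard_normal((n, n))
    M11, M12, M21, M22 = blocks(M)
    N11, N12, N21, N22 = blocks(N)

    M221 = M22 - M21 @ inv(M11) @ M12     # Schur complement of M11 in M
    N221 = N22 - N21 @ inv(N11) @ N12     # Schur complement of N11 in N

    # Identity (15).
    lhs = M21 @ N12 + M22 @ N22
    rhs = ((M21 @ N11 + M22 @ N21) @ inv(M11 @ N11 + M12 @ N21) @ (M11 @ N12 + M12 @ N22)
           + M221 @ inv(np.eye(n - k) + N21 @ inv(N11) @ inv(M11) @ M12) @ N221)
    assert np.allclose(lhs, rhs)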
4 Applications

In what follows we show that many existing identities are in fact consequences of Theorem 3.1, obtained by making special choices of the matrix P. In general, we always choose P such that P_{22.1} is a Hermitian matrix.

The first choice of P is

    P = \begin{pmatrix} Y^* & X^* \\ B & A \end{pmatrix} \begin{pmatrix} Y & B^* \\ X & A^* \end{pmatrix},

which gives rise to the following matrix identity:

    AA^* + BB^* = (BY + AX)(Y^*Y + X^*X)^{-1}(BY + AX)^* + (A - B(Y^*)^{-1}X^*)(I + X(Y^*Y)^{-1}X^*)^{-1}(A - B(Y^*)^{-1}X^*)^*.        (16)

A special case of (16), when Y is an identity matrix, is the identity (4).

The second choice of P is

    P = \begin{pmatrix} Y^* & X^* \\ B & A \end{pmatrix} \begin{pmatrix} Y & -B^* \\ -X & A^* \end{pmatrix},

which gives rise to the following matrix identity:

    AA^* - BB^* = (A - B(Y^*)^{-1}X^*)(I - X(Y^*Y)^{-1}X^*)^{-1}(A - B(Y^*)^{-1}X^*)^* - (BY - AX)(Y^*Y - X^*X)^{-1}(BY - AX)^*.        (17)

A special case of (17), when A is equal to B, is

    A(I - XY^{-1})^*(I - X(Y^*Y)^{-1}X^*)^{-1}(I - XY^{-1})A^* = A(Y - X)(Y^*Y - X^*X)^{-1}(Y - X)^*A^*.        (18)

When Y is an identity matrix, we acquire the identity (5) and

    A(I - X)^*(I - XX^*)^{-1}(I - X)A^* = A(I - X)(I - X^*X)^{-1}(I - X)^*A^*        (19)

from (17) and (18), respectively. Furthermore, we can yield Hua's identity (3) from the identity (5).

The third choice of P is

    P = \begin{pmatrix} Y^* & X^* \\ B^* & A^* \end{pmatrix} \begin{pmatrix} Y & A \\ X & B \end{pmatrix},

which gives rise to the matrix identities

    B^*A + A^*B = (B^*Y + A^*X)(Y^*Y + X^*X)^{-1}(Y^*A + X^*B) + (A^* - B^*(Y^*)^{-1}X^*)(I + X(Y^*Y)^{-1}X^*)^{-1}(B - XY^{-1}A)        (20)

and

    B^*A + A^*B = (B^* + A^*X)(I + X^*X)^{-1}(A + X^*B) + (A^* - B^*X^*)(I + XX^*)^{-1}(B - XA).        (21)
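Assuming random complex square matrices, for which the invertibility conditions implicit in (16), (17) and (21) hold generically, the following NumPy sketch (again an added illustration, not from the paper) checks these three identities; the size n = 4 is an arbitrary choice.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    inv = np.linalg.inv
    I = np.eye(n)

    def cmat():
        # A random complex n x n matrix; such matrices are invertible with probability one.
        return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

    def H(T):
        # Conjugate transpose, written T^* in the text.
        return T.conj().T

    A, B, X, Y = cmat(), cmat(), cmat(), cmat()

    # Identity (16).
    T1 = B @ Y + A @ X
    T2 = A - B @ inv(H(Y)) @ H(X)
    rhs16 = (T1 @ inv(H(Y) @ Y + H(X) @ X) @ H(T1)
             + T2 @ inv(I + X @ inv(H(Y) @ Y) @ H(X)) @ H(T2))
    assert np.allclose(A @ H(A) + B @ H(B), rhs16)

    # Identity (17).
    T3 = B @ Y - A @ X
    rhs17 = (T2 @ inv(I - X @ inv(H(Y) @ Y) @ H(X)) @ H(T2)
             - T3 @ inv(H(Y) @ Y - H(X) @ X) @ H(T3))
    assert np.allclose(A @ H(A) - B @ H(B), rhs17)

    # Identity (21), i.e., (20) with Y = I.
    rhs21 = ((H(B) + H(A) @ X) @ inv(I + H(X) @ X) @ (A + H(X) @ B)
             + (H(A) - H(B) @ H(X)) @ inv(I + X @ H(X)) @ (B - X @ A))
    assert np.allclose(H(B) @ A + H(A) @ B, rhs21)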