Applying Schur Complements for Handling General Updates of a Large, Sparse, Unsymmetric Matrix*

Applying Schur Complements for Handling General Updates of a Large, Sparse, Unsymmetric Matrix*

Jacek Gondzio
Systems Research Institute, Polish Academy of Sciences, Newelska 6, 01-447 Warsaw, Poland
e-mail: [email protected]

Technical Report ZTSW-2-G244/93, September 1993, revised April 30, 1995

Abstract

We describe a set of procedures for computing and updating an inverse representation of a large and sparse unsymmetric matrix $A$. The representation is built of two matrices: an easily invertible, large and sparse matrix $A_0$ and a dense Schur complement matrix $S$. An efficient heuristic is given that finds this representation for any matrix $A$ and keeps the size of $S$ as small as possible.

Equations with $A$ are replaced with a sequence of equations that involve matrices $A_0$ and $S$. The former take full advantage of the sparsity of $A_0$; the latter benefit from applying dense linear algebra techniques to a dense representation of the inverse of $S$. We show how to manage five general updates of $A$: row or column replacement, row and column addition or deletion, and a rank one correction. They all maintain an easily invertible form of $A_0$ and reduce to some low rank update of matrix $S$.

An experimental implementation of the approach is described, and preliminary computational results of applying it to handling working basis updates that arise in linear programming are given.

Key words: large sparse unsymmetric matrix, inverse representation, Schur complement, general updates.

* Supported by the Committee for Scientific Research of Poland, grant No PB 8 S505 015 05.

1 Introduction

In this paper we describe a method for the inverse representation of a large and sparse unsymmetric matrix $A$. The representation is expected to be used to solve equations

$$Ax = b \quad \text{and} \quad A^T y = d, \qquad (1)$$

where $x$, $b$, $y$ and $d$ are vectors from $\mathbb{R}^n$. We present a method for updating this representation after the following modifications of $A$:

- row replacement,
- column replacement,
- row and column addition,
- row and column deletion,
- rank one change.

LU decomposition (cf. Duff et al. [5], Chapter 8) is probably the most widely used method for representing an inverse of a sparse unsymmetric matrix $A$. Simplicity and efficiency of its update after column replacement is one of its main advantages. The updates by the method of Bartels and Golub [1] preserve sparsity of the LU factors and ensure accuracy of the representation that is sufficient for most practical applications. There exist many efficient implementations of this method [4, 29, 32], including an extension of Gill et al. [13] for different updates of a general rectangular $A$.

The Bartels-Golub updates of sparse LU factors have one basic disadvantage: they are achieved by a purely sequential algorithm. Hence, relatively little can be done to specialize them for new computer architectures. We feel motivated to look for a parallelisable method for the inverse representation of a sparse unsymmetric matrix $A$. Additionally, in order to cover a wider field of applications, we would like to be able to update this representation inexpensively and in a stable way after simple modifications of $A$. The method presented in this paper satisfies these requirements.

We propose to represent the inverse of $A$ with two matrices: an easily invertible fundamental basis $A_0$ and a Schur complement $S$ (hopefully, of small size).
The fundamental basis is an easily invertible matrix close to $A$ subject to row and column permutations. The "closeness" is measured by the number of columns of $A$ that have to be replaced with appropriate unit vectors in order to get a nonsingular matrix permutable to $A_0$. The number of columns that do not fit the required structure determines the size of the Schur complement. It is in our interest to keep this number as small as possible.

Many different forms of $A_0$ might be chosen. The most suitable one clearly depends on the particular structure of $A$. The first suggestion is to let $A_0$ be triangular. Although it seems very restrictive, this simple form already covers a large class of matrices, such as the linear programming bases arising in network or stochastic optimization [20]. The structure inherent to these problems implies that all bases are "nearly triangular".

As shown in [17], general linear programming bases can always be permuted to a form close to a block triangular matrix with at most 2×2 irreducible blocks. This particular form of $A_0$ ensures that equations with it can be solved very efficiently, since the sparsity of $A_0$ as well as that of the right hand side vectors can be fully exploited.

It was also shown in the abovementioned paper that the Schur complement inverse representation of $A$ can be updated in a stable and efficient way after column replacement in $A$ (a usual modification of linear programming bases). In this paper we extend the set of available updates of this representation with four new modifications of $A$: row replacement, row and column addition or deletion, and a rank one update. We show that in all cases the update resolves to one or at most two rank one corrections of the Schur complement. Those, in turn, can be handled in a stable way when a dense LU factorization [16] is applied to represent $S^{-1}$.

The motivation for the development of the method presented in this paper was the need for a reliable inverse representation of the working basis in a new approach to the solution of large linear optimization problems [19]. Hence, our illustrative test examples come from this particular application. However, we want to stress that the method presented can be used in a much wider context, whenever a sequence of equations with a large and sparse unsymmetric matrix $A$ is to be solved and some rows or columns of it are supposed to be replaced, deleted or bordered to $A$.

Let us mention that Schur complements are widely used in general engineering computations [3, 22] as well as in optimization applications [12]. Their use for handling the inverse representation of linear programming bases has been addressed by many authors [2, 6, 9] independently of the author's own interest [17, 20].

Some features of the method presented in this paper make it a particularly attractive alternative to the approach that uses Bartels-Golub updates of the sparse LU factorization [13]. The most important one is the splitting of the factorization step into "sparse" and "dense" parts. The fundamental basis $A_0$ shares the structure of $A$, hence it is supposed to be large and sparse. All equations with $A_0$ take full advantage of this fact. In contrast, the Schur complement matrix $S$ is expected to be small and filled sufficiently to justify the use of a dense factorization to represent its inverse.
The use of explicit LU factors of $S$ significantly simplifies the control of accuracy: sparsity does not influence the choice of the most stable elementary operator. Another important feature of the approach presented in this paper is that its factorization step is structurally parallelisable. Finally, we underline the simplicity and low cost of handling a rank one update of $A$.

The paper is organized as follows. In Section 2, the factorization step is discussed in detail. In particular, we address the heuristics for permuting a given matrix to a bordered triangular form, their extensions to find a bordered block triangular form with 2×2 pivots on the diagonal, and the issue of accuracy control in the $A^{-1}$ representation. In Section 3, the techniques for updating the inverse of $A$ after the matrix modifications mentioned earlier are described. Additionally, we address the issues of complexity and efficiency of the updates. In Section 4, we discuss the well known conflict that appears in sparse matrix computations between preserving the sparsity of the $A^{-1}$ representation and ensuring its accuracy. In Section 5, we give some preliminary computational results obtained with an experimental implementation of the method proposed. Final remarks and conclusions are the subject of Section 6.

2 Factorization

2.1 Fundamentals

Let $A_0$ be an $n \times n$ matrix, call it a fundamental basis, that differs from $A$ in $k$ columns. These different columns are determined by the partitioning matrix $V \in \mathbb{R}^{k \times n}$ that is built of $k$ unit vectors, each matching one column removed from $A$. Subject to column permutation, $A$ can be expressed in the form [2]

$$A = [\,A_1 \mid C\,], \qquad (2)$$

where $A_1 \in \mathbb{R}^{n \times (n-k)}$ contains columns that belong to both $A$ and $A_0$, while $C$ contains columns of $A$ that are not in $A_0$. Hence

$$AV^T = C \quad \text{and} \quad A = A_0 + (C - A_0 V^T)\,V. \qquad (3)$$

Let us introduce an augmented matrix

$$A_{\mathrm{aug}} = \begin{bmatrix} A_0 & C \\ V & 0 \end{bmatrix}. \qquad (4)$$

The pivoting operation done on the whole block $A_0$ reduces $A_{\mathrm{aug}}$ to

$$\begin{bmatrix} I_n & A_0^{-1} C \\ 0 & S \end{bmatrix}, \qquad (5)$$

where $S$ denotes the Schur complement [3, 22]

$$S = -V A_0^{-1} C. \qquad (6)$$

Lemma 1.
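To make equations (2)-(6) concrete, here is a minimal dense NumPy sketch of the representation and of the solve sequence it implies. This is an illustration derived from the formulas above, not the paper's implementation: $A_0$ appears as an ordinary dense triangular array rather than a sparse factorization, the three-step solve is simply block elimination through $A_0$ and $S$, and the closing rank-one column update of $S$ follows from definition (6) rather than from the update procedures of the paper's Section 3.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Fundamental basis A0: easily invertible; a well conditioned lower triangular
# matrix stands in here for the sparse, permuted-triangular A0 of the paper.
A0 = np.tril(rng.standard_normal((n, n))) + 5.0 * np.eye(n)

# A differs from A0 in k columns; `cols` are the replaced positions.
cols = np.array([1, 4, 6])
C = rng.standard_normal((n, k))              # the columns of A not in A0
V = np.zeros((k, n))
V[np.arange(k), cols] = 1.0                  # partitioning matrix of k unit vectors

A = A0 + (C - A0 @ V.T) @ V                  # equation (3)
assert np.allclose(A @ V.T, C)               # equation (3): A V^T = C

# Schur complement of A0 in [[A0, C], [V, 0]], equation (6); k x k and dense.
S = -V @ np.linalg.solve(A0, C)

# Solve A x = b through equations with A0 and S only.
b = rng.standard_normal(n)
w = np.linalg.solve(A0, b)                   # A0 w = b
z = np.linalg.solve(S, -V @ w)               # S z = -V w, so z = V x
x = w - np.linalg.solve(A0, C @ z) + V.T @ z
assert np.allclose(A @ x, b)                 # agrees with a direct solve

# Replacing one of the k "non-fitting" columns of A changes one column of C,
# so S receives a rank-one correction; A0 itself is untouched.
j = 1
c_new = rng.standard_normal(n)
S = S + np.outer(-V @ np.linalg.solve(A0, c_new - C[:, j]), np.eye(k)[j])
C[:, j] = c_new
assert np.allclose(S, -V @ np.linalg.solve(A0, C))
```

In the method itself, $A_0$ would be kept in a sparse (block) triangular form and $S$ in explicit dense LU factors, with the low-rank corrections applied to those factors instead of recomputing $S$ from scratch.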
Recommended publications
  • PSEUDO SCHUR COMPLEMENTS, PSEUDO PRINCIPAL PIVOT TRANSFORMS AND THEIR INHERITANCE PROPERTIES
    Electronic Journal of Linear Algebra (ISSN 1081-3810, a publication of the International Linear Algebra Society), Volume 30, pp. 455-477, August 2015. K. Bisht, G. Ravindran, and K.C. Sivakumar. Abstract. Extensions of the Schur complement and the principal pivot transform, where the usual inverses are replaced by the Moore-Penrose inverse, are revisited. These are called the pseudo Schur complement and the pseudo principal pivot transform, respectively. First, a generalization of the characterization of a block matrix to be an M-matrix is extended to the nonnegativity of the Moore-Penrose inverse. A comprehensive treatment of the fundamental properties of the extended notion of the principal pivot transform is presented. Inheritance properties with respect to certain matrix classes are derived, thereby generalizing some of the existing results. Finally, a thorough discussion on the preservation of left eigenspaces by the pseudo principal pivot transformation is presented. Key words: principal pivot transform, Schur complement, nonnegative Moore-Penrose inverse, P†-matrix, R†-matrix, left eigenspace, inheritance properties. AMS subject classifications: 15A09, 15A18, 15B48. 1. Introduction. Let $M \in \mathbb{R}^{m \times n}$ be a block matrix partitioned as $\begin{bmatrix} A & B \\ C & D \end{bmatrix}$, where $A \in \mathbb{R}^{k \times k}$ is nonsingular. Then the classical Schur complement of $A$ in $M$, denoted by $M/A$, is given by $D - CA^{-1}B \in \mathbb{R}^{(m-k) \times (n-k)}$. This notion has proved to be a fundamental idea in many applications like numerical analysis, statistics and operator inequalities, to name a few. We refer the reader to [27] for a comprehensive account of the Schur complement.
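As a quick illustration of the definition quoted above, the pseudo Schur complement can be formed in NumPy by swapping the inverse for the Moore-Penrose pseudoinverse. This sketch only assumes the definition $D - CA^{\dagger}B$; the block sizes, random data and the check against the classical complement are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Block matrix M = [[A, B], [C, D]] with a k x k leading block A.
k, m, n = 3, 5, 6
A = rng.standard_normal((k, k))
B = rng.standard_normal((k, n - k))
C = rng.standard_normal((m - k, k))
D = rng.standard_normal((m - k, n - k))

# Classical Schur complement M/A (A nonsingular) and its pseudo version.
classical = D - C @ np.linalg.inv(A) @ B
pseudo = D - C @ np.linalg.pinv(A) @ B        # Moore-Penrose inverse instead
assert np.allclose(classical, pseudo)         # they coincide for invertible A

# The pseudo Schur complement remains defined when A is singular.
A_singular = np.outer(rng.standard_normal(k), rng.standard_normal(k))  # rank one
pseudo_singular = D - C @ np.linalg.pinv(A_singular) @ B
print(pseudo_singular.shape)                  # (m - k, n - k)
```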
  • Notes on the Schur Complement
    University of Pennsylvania ScholarlyCommons, Departmental Papers (CIS), Department of Computer & Information Science, December 10, 2010. Recommended citation: Jean H. Gallier, "Notes on the Schur Complement", December 2010; available at ScholarlyCommons: https://repository.upenn.edu/cis_papers/601.

    The Schur Complement and Symmetric Positive Semidefinite (and Definite) Matrices. Jean Gallier, December 10, 2010. 1 Schur Complements. In this note, we provide some details and proofs of some results from Appendix A.5 (especially Section A.5.5) of Convex Optimization by Boyd and Vandenberghe [1]. Let $M$ be an $n \times n$ matrix written as a $2 \times 2$ block matrix $M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$, where $A$ is a $p \times p$ matrix and $D$ is a $q \times q$ matrix, with $n = p + q$ (so $B$ is a $p \times q$ matrix and $C$ is a $q \times p$ matrix). We can try to solve the linear system $\begin{bmatrix} A & B \\ C & D \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} c \\ d \end{bmatrix}$, that is, $Ax + By = c$, $Cx + Dy = d$, by mimicking Gaussian elimination: assuming that $D$ is invertible, we first solve for $y$, getting $y = D^{-1}(d - Cx)$, and after substituting this expression for $y$ in the first equation, we get $Ax + B(D^{-1}(d - Cx)) = c$, that is, $(A - BD^{-1}C)x = c - BD^{-1}d$. If the matrix $A - BD^{-1}C$ is invertible, then we obtain the solution to our system: $x = (A - BD^{-1}C)^{-1}(c - BD^{-1}d)$, $y = D^{-1}(d - C(A - BD^{-1}C)^{-1}(c - BD^{-1}d))$. The matrix $A - BD^{-1}C$ is called the Schur complement of $D$ in $M$.
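The closed-form solution derived in the note is easy to check numerically. A small NumPy sketch of exactly that elimination follows; the sizes and random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 4, 3
A, B = rng.standard_normal((p, p)), rng.standard_normal((p, q))
C, D = rng.standard_normal((q, p)), rng.standard_normal((q, q))
c, d = rng.standard_normal(p), rng.standard_normal(q)
M = np.block([[A, B], [C, D]])

# Schur complement of D in M and the closed-form solution of M [x; y] = [c; d].
S = A - B @ np.linalg.solve(D, C)
x = np.linalg.solve(S, c - B @ np.linalg.solve(D, d))
y = np.linalg.solve(D, d - C @ x)

assert np.allclose(M @ np.concatenate([x, y]), np.concatenate([c, d]))
```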
  • The Schur Complement and H-Matrix Theory
    University of Novi Sad, Faculty of Technical Sciences. Maja Nedović, The Schur Complement and H-Matrix Theory, doctoral dissertation, Novi Sad, 2016. Mentor: Professor Ljiljana Cvetković, PhD. Language of text: English; abstract in Serbian and English. Scientific field: Applied Mathematics; scientific discipline: Numerical Mathematics. Key words: H-matrices, Schur complement, eigenvalue localization, maximum norm bounds for the inverse matrix, closure of matrix classes. Holding data: Library of the Faculty of Technical Sciences, Trg Dositeja Obradovića 6, Novi Sad. Abstract: This thesis studies subclasses of the class of H-matrices and their applications, with emphasis on the investigation of the Schur complement properties. The contributions of the thesis are new nonsingularity results, bounds for the maximum norm of the inverse matrix, closure properties of some matrix classes under taking Schur complements, as well as results on localization and separation of the eigenvalues of the Schur complement based on the entries of the original matrix.
  • The Multivariate Normal Distribution
    The multivariate normal distribution. Patrick Breheny, University of Iowa, Likelihood Theory (BIOS 7110), September 2. Outline: multivariate normal distribution; linear combinations and quadratic forms; marginal and conditional distributions. Introduction: Today we will introduce the multivariate normal distribution and attempt to discuss its properties in a fairly thorough manner. The multivariate normal distribution is by far the most important multivariate distribution in statistics. It's important for all the reasons that the one-dimensional Gaussian distribution is important, but even more so in higher dimensions, because many distributions that are useful in one dimension do not easily extend to the multivariate case. Inverse: Before we get to the multivariate normal distribution, let's review some important results from linear algebra that we will use throughout the course, starting with inverses. Definition: The inverse of an $n \times n$ matrix $A$, denoted $A^{-1}$, is the matrix satisfying $AA^{-1} = A^{-1}A = I_n$, where $I_n$ is the $n \times n$ identity matrix. Note: We're sort of getting ahead of ourselves by saying that $A^{-1}$ is "the" matrix satisfying
  • On Skew-Symmetric Matrices Related to the Vector Cross Product in R7
    Preprints of the Department of Mathematics, University of Coimbra (Pré-Publicações do Departamento de Matemática, Universidade de Coimbra), Preprint Number 17-07. On Skew-Symmetric Matrices Related to the Vector Cross Product in $\mathbb{R}^7$. P. D. Beites, A. P. Nicolás and José Vitória. Abstract: A study of real skew-symmetric matrices of orders 7 and 8, defined through the vector cross product in $\mathbb{R}^7$, is presented. More concretely, results on matrix properties, eigenvalues, (generalized) inverses and rotation matrices are established. Keywords: vector cross product, skew-symmetric matrix, matrix properties, eigenvalues, (generalized) inverses, rotation matrices. Math. Subject Classification (2010): 15A72, 15B57, 15A18, 15A09, 15B10. 1. Introduction. A classical result known as the generalized Hurwitz Theorem asserts that, over a field of characteristic different from 2, if A is a finite dimensional composition algebra with identity, then its dimension is equal to 1, 2, 4 or 8. Furthermore, A is isomorphic either to the base field, a separable quadratic extension of the base field, a generalized quaternion algebra or a generalized octonion algebra [8]. A consequence of the cited theorem is that the values of n for which the Euclidean spaces $\mathbb{R}^n$ can be equipped with a binary vector cross product, satisfying the same requirements as the usual one in $\mathbb{R}^3$, are restricted to 1 (trivial case), 3 and 7. A complete account of the existence of r-fold vector cross products for d-dimensional vector spaces, where they are used to construct exceptional Lie superalgebras, is in [3]. The interest in octonions, seemingly forgotten for some time, resurged in the last decades, not only for their intrinsic mathematical relevance but also because of their applications, as well as those of the vector cross product in $\mathbb{R}^7$.
  • Problems and Theorems in Linear Algebra
    Translations of Mathematical Monographs, Volume 134. V.V. Prasolov, Problems and Theorems in Linear Algebra, American Mathematical Society. CONTENTS. Preface xv. Main notations and conventions xvii. Chapter I. Determinants 1. Historical remarks: Leibniz and Seki Kova; Cramer, L'Hospital, Cauchy and Jacobi. 1. Basic properties of determinants 1. The Vandermonde determinant and its application. The Cauchy determinant. Continued fractions and the determinant of a tridiagonal matrix. Certain other determinants. Problems. 2. Minors and cofactors 9. Binet-Cauchy's formula. Laplace's theorem. Jacobi's theorem on minors of the adjoint matrix. The generalized Sylvester's identity. Chebotarev's theorem on the matrix $\|\varepsilon^{ij}\|$, where $\varepsilon =$ ... Problems. 3. The Schur complement 16. Given $A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}$, the matrix $(A|A_{11}) = A_{22} - A_{21}A_{11}^{-1}A_{12}$ is called the Schur complement (of $A_{11}$ in $A$). 3.1. $\det A = \det A_{11} \det(A|A_{11})$. 3.2. Theorem. $(A|B) = ((A|C)|(B|C))$. Problems. 4. Symmetric functions, sums $x_1^k + \cdots + x_n^k$, and Bernoulli numbers 19. Determinant relations between $\sigma_k(x_1,\ldots,x_n)$, $s_k(x_1,\ldots,x_n) = x_1^k + \cdots + x_n^k$ and $p_k(x_1,\ldots,x_n)$. A determinant formula for $S_n(k) = 1^n + \cdots + (k-1)^n$. The Bernoulli numbers. 4.4. Theorem. Let $u = S_1(x)$ and $v = S_2(x)$. Then for $k \geq 1$ there exist polynomials $p_k$ and $q_k$ such that $S_{2k+1}(x) = u^2 p_k(u)$ and $S_{2k}(x) = v q_k(u)$. Problems. Solutions. Chapter II. Linear spaces 35. Historical remarks: Hamilton and Grassmann. 5. The dual space. The orthogonal complement 37. Linear equations and their application to the following theorem: 5.4.3.
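Item 3.1 of the listed chapter, $\det A = \det A_{11}\,\det(A|A_{11})$, can be verified numerically in a few lines; the following check is an added illustration with arbitrary block sizes, not part of the book's text.

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 3, 4
A = rng.standard_normal((p + q, p + q))
A11, A12 = A[:p, :p], A[:p, p:]
A21, A22 = A[p:, :p], A[p:, p:]

# (A|A11) = A22 - A21 A11^{-1} A12, the Schur complement of A11 in A.
schur = A22 - A21 @ np.linalg.solve(A11, A12)

# 3.1: det A = det A11 * det (A|A11).
assert np.isclose(np.linalg.det(A), np.linalg.det(A11) * np.linalg.det(schur))
```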
  • Manifestations of the Schur Complement*
    Linear Algebra and Its Applications 8, 189-211 (1974). Manifestations of the Schur Complement*. Richard W. Cottle, Stanford University, Stanford, California. Recommended by Gene Golub. ABSTRACT. This expository paper describes the ways in which a matrix theoretic construct called the Schur complement arises. Properties of the Schur complement are shown to have use in computing inertias of matrices, covariance matrices of conditional distributions, and other information of interest. 1. INTRODUCTION. In recent years, the designation "Schur complement" has been applied to any matrix of the form $D - CA^{-1}B$. These objects have undoubtedly been encountered from the time matrices were first used. But today, under this new name and with new emphasis on their properties, there is greater awareness of the widespread appearance and utility of Schur complements. The purpose of this paper is to highlight some of the many ways that Schur complements arise and to illustrate how their properties assist one in efficiently computing inertias of matrices, covariance matrices of conditional distributions, and other important information. Why Schur? Complement of what? The full definition, introduced by E. V. Haynsworth [15], answers the second question. Over any field, if $A$ is a nonsingular leading submatrix of the block matrix $M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}$, then $D - CA^{-1}B$ is the Schur complement of $A$ in $M$ and is denoted by $(M/A)$. (* Research partially supported by the Office of Naval Research under Contract N-00014-67-A-0012-0011; U.S. Atomic Energy Commission Contract AT(04-3)-326 PA #18; and National Science Foundation, Grant GP 31393X. © American Elsevier Publishing Company, Inc., 1974.)
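One of the "manifestations" named in the abstract is the computation of matrix inertias, which rests on Haynsworth's inertia additivity formula, In(M) = In(A) + In(M/A) for a symmetric M with nonsingular leading block A. The following NumPy check of that identity is an added illustration, not taken from the paper.

```python
import numpy as np

def inertia(X, tol=1e-10):
    """Return (#positive, #negative, #zero) eigenvalues of a symmetric matrix."""
    w = np.linalg.eigvalsh(X)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

rng = np.random.default_rng(4)
p, q = 3, 4
M = rng.standard_normal((p + q, p + q))
M = (M + M.T) / 2                          # symmetric, generically indefinite

A, B = M[:p, :p], M[:p, p:]
C, D = M[p:, :p], M[p:, p:]                # C equals B.T by symmetry
M_over_A = D - C @ np.linalg.solve(A, B)   # Schur complement (M/A)

assert inertia(M) == tuple(a + b for a, b in zip(inertia(A), inertia(M_over_A)))
```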
  • Mmono134-Endmatter.Pdf
    Selected Titles in This Series. 151 S. Yu. Slavyanov, Asymptotic solutions of the one-dimensional Schrödinger equation, 1996. 150 B. Ya. Levin, Lectures on entire functions, 1996. 149 Takashi Sakai, Riemannian geometry, 1996. 148 Vladimir I. Piterbarg, Asymptotic methods in the theory of Gaussian processes and fields, 1996. 147 S. G. Gindikin and L. R. Volevich, Mixed problem for partial differential equations with quasihomogeneous principal part, 1996. 146 L. Ya. Adrianova, Introduction to linear systems of differential equations, 1995. 145 A. N. Andrianov and V. G. Zhuravlev, Modular forms and Hecke operators, 1995. 144 O. V. Troshkin, Nontraditional methods in mathematical hydrodynamics, 1995. 143 V. A. Malyshev and R. A. Minlos, Linear infinite-particle operators, 1995. 142 N. V. Krylov, Introduction to the theory of diffusion processes, 1995. 141 A. A. Davydov, Qualitative theory of control systems, 1994. 140 Aizik I. Volpert, Vitaly A. Volpert, and Vladimir A. Volpert, Traveling wave solutions of parabolic systems, 1994. 139 I. V. Skrypnik, Methods for analysis of nonlinear elliptic boundary value problems, 1994. 138 Yu. P. Razmyslov, Identities of algebras and their representations, 1994. 137 F. I. Karpelevich and A. Ya. Kreinin, Heavy traffic limits for multiphase queues, 1994. 136 Masayoshi Miyanishi, Algebraic geometry, 1994. 135 Masaru Takeuchi, Modern spherical functions, 1994. 134 V.
  • Algebraic Properties of Manin Matrices 1
    ITEP-TH-43/08, arXiv:0901.0235v1 [math.QA] 2 Jan 2009. Algebraic properties of Manin matrices 1. A. Chervov (Institute for Theoretical and Experimental Physics, Moscow, Russia), G. Falqui (Università di Milano-Bicocca, Milano, Italy), V. Rubtsov (Université d'Angers, Angers, France). Abstract. We study a class of matrices with noncommutative entries, which were first considered by Yu. I. Manin in 1988 in relation with quantum group theory. They are defined as "noncommutative endomorphisms" of a polynomial algebra. More explicitly, their defining conditions read: 1) elements in the same column commute; 2) commutators of the cross terms are equal: $[M_{ij}, M_{kl}] = [M_{kj}, M_{il}]$ (e.g. $[M_{11}, M_{22}] = [M_{21}, M_{12}]$). The basic claim is that despite noncommutativity many theorems of linear algebra hold true for Manin matrices in a form identical to that of the commutative case. Moreover, in some examples the converse is also true, that is, Manin matrices are the most general class of matrices such that linear algebra holds true for them. The present paper gives a complete list and detailed proofs of algebraic properties of Manin matrices known up to the moment; many of them are new. In particular, we present the formulation of Manin matrices in terms of matrix (Leningrad) notations; provide complete proofs that an inverse to a Manin matrix is again a Manin matrix and for the Schur formula for the determinant of a block matrix; we generalize the noncommutative Cauchy-Binet formulas discovered recently [arXiv:0809.3516], which includes the classical Capelli and related identities. We also discuss many other properties, such as the Cramer formula for the inverse matrix, the Cayley-Hamilton theorem, Newton and MacMahon-Wronski identities, Plücker relations, Sylvester's theorem, the Lagrange-Desnanot-Lewis Carroll formula, the Weinstein-Aronszajn formula, some multiplicativity properties for the determinant, relations with quasideterminants, calculation of the determinant via Gauss decomposition, conjugation to the second normal (Frobenius) form, and so on and so forth.
  • Probability and Mathematical Physics Vol. 1 (2020)
    Probability and Mathematical Physics 1:1 (2020), msp.org/pmp. EDITORS-IN-CHIEF: Alexei Borodin (Massachusetts Institute of Technology, United States), Hugo Duminil-Copin (IHÉS, France & Université de Genève, Switzerland), Robert Seiringer (Institute of Science and Technology, Austria), Sylvia Serfaty (Courant Institute, New York University, United States). EDITORIAL BOARD: Nalini Anantharaman (Université de Strasbourg, France), Scott Armstrong (Courant Institute, New York University, United States), Roland Bauerschmidt (University of Cambridge, UK), Ivan Corwin (Columbia University, United States), Mihalis Dafermos (Princeton University, United States), Semyon Dyatlov (University of California Berkeley, United States), Yan Fyodorov (King's College London, UK), Christophe Garban (Université Claude Bernard Lyon 1, France), Alessandro Giuliani (Università degli studi Roma Tre, Italy), Alice Guionnet (École normale supérieure de Lyon & CNRS, France), Pierre-Emmanuel Jabin (Pennsylvania State University, United States), Mathieu Lewin (Université Paris Dauphine & CNRS, France), Eyal Lubetzky (Courant Institute, New York University, United States), Jean-Christophe Mourrat (Courant Institute, New York University, United States), Laure Saint Raymond (École normale supérieure de Lyon & CNRS, France), Benjamin Schlein (Universität Zürich, Switzerland), Vlad Vicol (Courant Institute, New York University, United States), Simone Warzel (Technische Universität München, Germany). PRODUCTION: Silvio Levy (Scientific Editor), [email protected]. See inside back cover or msp.org/pmp for submission instructions. Probability and Mathematical Physics (ISSN 2690-1005 electronic, 2690-0998 printed) is published continuously online at Mathematical Sciences Publishers, 798 Evans Hall #3840, c/o University of California, Berkeley, CA 94720-3840. Periodical rate postage paid at Berkeley, CA 94704, and additional mailing offices. PMP peer review and production are managed by EditFlow® from MSP.
  • THE SCHUR COMPLEMENT AND ITS APPLICATIONS (Numerical Methods and Algorithms)
    THE SCHUR COMPLEMENT AND ITS APPLICATIONS. Numerical Methods and Algorithms, Volume 4. Series Editor: Claude Brezinski, Université des Sciences et Technologies de Lille, France. Edited by Fuzhen Zhang, Nova Southeastern University, Fort Lauderdale, U.S.A.; Shenyang Normal University, Shenyang, China. Springer. Library of Congress Cataloging-in-Publication Data: a C.I.P. record for this book is available from the Library of Congress. ISBN 0-387-24271-6; e-ISBN 0-387-24273-2. Printed on acid-free paper. © 2005 Springer Science+Business Media, Inc. All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, Inc., 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed in the United States of America. 987654321. SPIN 11161356. springeronline.com. To our families, friends, and the matrix community. Issai Schur (1875-1941). This portrait of Issai Schur was apparently made by the "Atelier Hanni Schwarz, N. W. Dorotheenstraße 73" in Berlin, c. 1917, and appears in Ausgewählte Arbeiten zu den Ursprüngen der Schur-Analysis: Gewidmet dem großen Mathematiker Issai Schur (1875-1941), edited by Bernd Fritzsche & Bernd Kirstein, pub.
  • Partitioned Matrices and the Schur Complement
    Appendix P: Partitioned Matrices and the Schur Complement. TABLE OF CONTENTS: §P.1. Partitioned Matrix, P–3; §P.2. Schur Complements, P–3; §P.3. Block Diagonalization, P–3; §P.4. Determinant Formulas, P–3; §P.5. Partitioned Inversion, P–4; §P.6. Solution of Partitioned Linear System, P–4; §P.7. Rank Additivity, P–5; §P.8. Inertia Additivity, P–5; §P.9. The Quotient Property, P–6; §P.10. Generalized Schur Complements, P–6; §P. Notes and Bibliography, P–6. Partitioned matrices often appear in the exposition of Finite Element Methods. This Appendix collects some basic material on the subject. Emphasis is placed on results involving the Schur complement. This is the name given in the linear algebra literature to matrix objects obtained through the condensation (partial elimination) process discussed in Chapter 10. §P.1. Partitioned Matrix. Suppose that the square matrix $M$, dimensioned $(n+m) \times (n+m)$, is partitioned into four submatrix blocks as $M = \begin{bmatrix} A_{n \times n} & B_{n \times m} \\ C_{m \times n} & D_{m \times m} \end{bmatrix}$ (P.1). The dimensions of $A$, $B$, $C$ and $D$ are as shown in the block display. $A$ and $D$ are square matrices, but $B$ and $C$ are not square unless $n = m$. Entries are generally assumed to be complex. §P.2. Schur Complements. If $A$ is nonsingular, the Schur complement of $M$ with respect to $A$ is defined as $M/A \stackrel{\mathrm{def}}{=} D - CA^{-1}B$ (P.2). If $D$ is nonsingular, the Schur complement of $M$ with respect to $D$ is defined as $M/D \stackrel{\mathrm{def}}{=} A - BD^{-1}C$ (P.3). Matrices (P.2) and (P.3) are also called the Schur complement of $A$ in $M$ and the Schur complement of $D$ in $M$, respectively.
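The determinant formula and partitioned inversion announced in sections P.4 and P.5 follow directly from definition (P.2). A short NumPy sketch checking both, with arbitrary block sizes chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 3, 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, m))
C, D = rng.standard_normal((m, n)), rng.standard_normal((m, m))
M = np.block([[A, B], [C, D]])

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B                      # M/A, equation (P.2)
Sinv = np.linalg.inv(S)

# Partitioned inverse of M expressed through A^{-1} and (M/A)^{-1}.
Minv = np.block([[Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
                 [-Sinv @ C @ Ainv,                   Sinv]])
assert np.allclose(Minv, np.linalg.inv(M))

# Determinant formula of section P.4: det M = det A * det (M/A).
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S))
```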