Applying Schur Complements for Handling General Updates of a Large, Sparse, Unsymmetric Matrix*
Jacek Gondzio
Systems Research Institute, Polish Academy of Sciences, Newelska 6, 01-447 Warsaw, Poland
e-mail: [email protected]

Technical Report ZTSW-2-G244/93
September 1993, revised April 30, 1995

Abstract

We describe a set of procedures for computing and updating an inverse representation of a large and sparse unsymmetric matrix A. The representation is built of two matrices: an easily invertible, large and sparse matrix A_0 and a dense Schur complement matrix S. An efficient heuristic is given that finds this representation for any matrix A and keeps the size of S as small as possible.

Equations with A are replaced with a sequence of equations that involve the matrices A_0 and S. The former take full advantage of the sparsity of A_0; the latter benefit from applying dense linear algebra techniques to a dense representation of the inverse of S. We show how to manage five general updates of A: row or column replacement, row and column addition or deletion, and a rank one correction. They all maintain an easily invertible form of A_0 and reduce to some low rank update of the matrix S.

An experimental implementation of the approach is described, and preliminary computational results of applying it to handling the working basis updates that arise in linear programming are given.

Key words: large sparse unsymmetric matrix, inverse representation, Schur complement, general updates.

* Supported by the Committee for Scientific Research of Poland, grant No PB 8 S505 015 05.

1 Introduction

In this paper we describe a method for the inverse representation of a large and sparse unsymmetric matrix A. The representation is expected to be used to solve the equations

    Ax = b    and    A^T y = d,                                   (1)

where x, b, y and d are vectors from R^n. We present a method for updating this representation after the following modifications of A:
- row replacement,
- column replacement,
- row and column addition,
- row and column deletion,
- rank one change.

LU decomposition (cf. Duff et al. [5], Chapter 8) is probably the most widely used method for representing an inverse of a sparse unsymmetric matrix A. The simplicity and efficiency of its update after column replacement is one of its main advantages. The updates by the method of Bartels and Golub [1] preserve the sparsity of the LU factors and ensure an accuracy of the representation that is sufficient for most practical applications. There exist many efficient implementations of this method [4, 29, 32], including an extension of Gill et al. [13] for different updates of a general rectangular A.

The Bartels-Golub updates of sparse LU factors have one basic disadvantage: they are achieved by a purely sequential algorithm. Hence, relatively little can be done to specialize them for new computer architectures. We feel motivated to look for a parallelisable method for the inverse representation of a sparse unsymmetric matrix A. Additionally, in order to cover a wider field of applications, we would like to be able to update this representation inexpensively and in a stable way after simple modifications of A. The method presented in this paper satisfies these requirements.

We propose to represent the inverse of A with two matrices: an easily invertible fundamental basis A_0 and a Schur complement S (hopefully, of small size). The fundamental basis is an easily invertible matrix close to A subject to row and column permutations.
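To fix ideas, the following sketch shows how a solve with A can be replaced by solves with A_0 and with a small dense S, as claimed above. It is a minimal dense illustration only: the function name solve_with_schur, the use of plain numpy solves, and the explicit formation of S are assumptions made here for brevity, anticipating the formal definitions of V, C and S given in Section 2; in practice A_0 would be kept in a sparse, easily invertible factored form.

```python
import numpy as np

def solve_with_schur(A0, C, V, b):
    """Solve A x = b, where A = A0 + (C - A0 V^T) V (cf. Section 2).

    A0 : (n, n) fundamental basis (sparse and easily invertible in practice;
         dense here only to keep the sketch short)
    C  : (n, k) columns of A that are not present in A0
    V  : (k, n) partitioning matrix built of k unit rows
    """
    S = -V @ np.linalg.solve(A0, C)         # dense Schur complement, (k, k)
    u = np.linalg.solve(A0, b)              # first solve with A0
    x2 = np.linalg.solve(S, -V @ u)         # small dense solve with S
    z = np.linalg.solve(A0, b - C @ x2)     # second solve with A0
    return z + V.T @ x2                     # reassemble the solution of A x = b

# Tiny check: A differs from an upper triangular A0 in its last column.
rng = np.random.default_rng(0)
n = 5
A0 = np.triu(rng.random((n, n))) + n * np.eye(n)
V = np.zeros((1, n)); V[0, n - 1] = 1.0
C = rng.random((n, 1))
A = A0 + (C - A0 @ V.T) @ V
b = rng.random(n)
assert np.allclose(A @ solve_with_schur(A0, C, V, b), b)
```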
The "closeness" is measured with the number of columns of A that have to be replaced with appropriate unit vectors in order to get a nonsingular matrix permutable to A_0. The number of columns that do not fit the required structure determines the size of the Schur complement. It is in our interest to keep this number as small as possible.

Many different forms of A_0 might be chosen. The most suitable one would clearly depend on the particular structure of A. The first suggestion is to let A_0 be triangular. Although it seems very restrictive, this simple form already covers a large class of matrices, e.g., linear programming bases arising in network or stochastic optimization [20]. The structure inherent to these problems implies that all bases are "nearly triangular".

As shown in [17], general linear programming bases can always be permuted to a form close to a block triangular matrix with at most 2x2 irreducible blocks. This particular form of A_0 ensures that equations with it can be solved very efficiently, since the sparsity of A_0 as well as that of the right hand side vectors can be fully exploited.

It was also shown in the above-mentioned paper that the Schur complement representation of A^{-1} can be updated in a stable and efficient way after a column replacement in A, a usual modification of linear programming bases. In this paper we extend the set of available updates of this representation with four new modifications to A: row replacement, row and column addition or deletion, and a rank one update. We show that in all cases the update resolves to one or at most two rank one corrections to the Schur complement. Those, in turn, can be handled in a stable way when dense LU factorization [16] is applied to represent S^{-1}.

The motivation for the development of the method presented in this paper was the need for a reliable inverse representation of the working basis in a new approach to the solution of large linear optimization problems [19]. Hence, our illustrative test examples will come from this particular application. However, we want to stress that the method presented can be used in a much wider context, whenever a sequence of equations with a large and sparse unsymmetric matrix A is to be solved and some rows/columns of it are supposed to be replaced, deleted or bordered to A.

Let us mention that Schur complements are widely used in general engineering computations [3, 22] as well as in optimization applications [12]. Their use for handling the inverse representation of linear programming bases has been addressed by many authors [2, 6, 9] independently of the author's own interest [17, 20].

Some features of the method presented in this paper make it a particularly attractive alternative to the approach that uses Bartels-Golub updates of the sparse LU factorization [13]. The most important one is splitting the factorization step into "sparse" and "dense" parts. The fundamental basis A_0 shares the structure of A, hence it is supposed to be large and sparse. All equations with A_0 will take full advantage of this fact. In contrast, the Schur complement matrix S is expected to be small and filled sufficiently to justify the use of a dense factorization to represent its inverse. The use of explicit LU factors of S significantly simplifies the control of accuracy: sparsity does not influence the choice of the most stable elementary operator.
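For illustration, the sketch below solves a system with a rank one correction S + u v^T while reusing existing dense LU factors of S. It relies on a Sherman-Morrison step purely for brevity; the function name solve_rank_one_updated and the scipy routines chosen are assumptions of this sketch, whereas the method described in this paper modifies the explicit LU factors of S themselves to retain control of accuracy over a sequence of updates.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_rank_one_updated(lu_piv, u, v, b):
    """Solve (S + u v^T) y = b, reusing dense LU factors of S.

    lu_piv is the output of scipy.linalg.lu_factor(S).  The Sherman-Morrison
    formula is used here only as a compact stand-in for a proper update of
    the LU factors of S.
    """
    Sinv_b = lu_solve(lu_piv, b)
    Sinv_u = lu_solve(lu_piv, u)
    alpha = 1.0 + v @ Sinv_u                 # must stay safely away from zero
    return Sinv_b - Sinv_u * (v @ Sinv_b) / alpha

# Tiny check on a 2x2 Schur complement.
S = np.array([[4.0, 1.0], [2.0, 3.0]])
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
b = np.array([1.0, 2.0])
y = solve_rank_one_updated(lu_factor(S), u, v, b)
assert np.allclose((S + np.outer(u, v)) @ y, b)
```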
Another important feature of the approach presented in this paper is that its factorization step is structurally parallelisable. Finally, we underline the simplicity and low cost of handling a rank one update of A.

The paper is organized as follows. In Section 2, the factorization step is discussed in detail. In particular, we address the heuristics for permuting a given matrix to a bordered triangular form, their extensions to find a bordered block triangular form with 2x2 pivots on the diagonal, and the issue of accuracy control in the A^{-1} representation. In Section 3, the techniques of updating the inverse of A after the matrix modifications mentioned earlier are described. Additionally, we address the issues of complexity and efficiency of the updates. In Section 4, we discuss the well known conflict that appears in sparse matrix computations between preserving the sparsity of the A^{-1} representation and ensuring its accuracy. In Section 5, we give some preliminary computational results obtained with an experimental implementation of the method proposed. Final remarks and conclusions are the subject of Section 6.

2 Factorization

2.1 Fundamentals

Let A_0 be an n x n matrix, call it a fundamental basis, that differs from A in k columns. These different columns are determined by the partitioning matrix V \in R^{k x n} that is built of k unit vectors, each matching one column removed from A. Subject to column permutation, A can be expressed in the form [2]

    A = [ A_1 | C ],                                              (2)

where A_1 \in R^{n x (n-k)} contains columns that belong to both A and A_0, while C contains the columns of A that are not in A_0. Hence

    A V^T = C    and    A = A_0 + (C - A_0 V^T) V.                (3)

Let us introduce the augmented matrix

    A_aug = [ A_0   C ]
            [  V    0 ].                                          (4)

The pivoting operation done on the whole block A_0 reduces A_aug to

    [ I_n   A_0^{-1} C ]
    [  0        S      ],                                         (5)

where S denotes the Schur complement [3, 22]

    S = -V A_0^{-1} C.                                            (6)

Lemma 1.
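The construction in (2)-(6) can be checked numerically. The short sketch below is plain dense numpy; the variable names and the choice of which columns are kept out of the fundamental basis are arbitrary assumptions of the example. It builds A_0, V and C from a generic nonsingular A, verifies identity (3), and confirms that block pivoting on A_0 reduces the augmented matrix (4) to the form (5) with S given by (6).

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 2
A = rng.random((n, n)) + n * np.eye(n)        # a generic nonsingular matrix

cols = [2, 5]                                 # columns of A kept out of A_0
V = np.zeros((k, n))
for r, j in enumerate(cols):
    V[r, j] = 1.0                             # partitioning matrix of unit rows
C = A @ V.T                                   # the removed columns, C = A V^T

A0 = A.copy()
for j in cols:
    A0[:, j] = np.eye(n)[:, j]                # replace them by unit columns

# identity (3): A = A_0 + (C - A_0 V^T) V
assert np.allclose(A, A0 + (C - A0 @ V.T) @ V)

# Schur complement (6) and augmented matrix (4)
S = -V @ np.linalg.solve(A0, C)
A_aug = np.block([[A0, C], [V, np.zeros((k, k))]])

# block pivoting on A_0, i.e. premultiplying A_aug by [[A_0^{-1}, 0], [-V A_0^{-1}, I]],
# yields the reduced form (5)
E = np.block([[np.linalg.inv(A0), np.zeros((n, k))],
              [-V @ np.linalg.inv(A0), np.eye(k)]])
reduced = np.block([[np.eye(n), np.linalg.solve(A0, C)],
                    [np.zeros((k, n)), S]])
assert np.allclose(E @ A_aug, reduced)
```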