Block Recursive Computation of Generalized Inverses
Electronic Journal of Linear Algebra, ISSN 1081-3810, a publication of the International Linear Algebra Society, Volume 26, pp. 394-405, July 2013.

BLOCK RECURSIVE COMPUTATION OF GENERALIZED INVERSES∗

MARKO D. PETKOVIĆ† AND PREDRAG S. STANIMIROVIĆ†

Abstract. A fully block recursive method for computing outer generalized inverses of a given square matrix is introduced. The method is applicable even when some of the main diagonal minors of A are singular, or A itself is singular. The computational complexity of the method is no harder than that of matrix multiplication, under the assumption that the Strassen matrix inversion algorithm is used. A partially recursive algorithm for computing various classes of generalized inverses is also developed. This method can be used to accelerate the known methods for computing generalized inverses.

Key words. Moore-Penrose inverse, Outer inverses, Banachiewicz-Schur form, Strassen method.

AMS subject classifications. 15A09.

1. Introduction. Following the usual notation, the set of all $m \times n$ real matrices of rank $r$ is denoted by $\mathbb{R}^{m\times n}_r$, while the set of all real $m \times n$ matrices is denoted by $\mathbb{R}^{m\times n}$. The principal submatrix of an $n \times n$ matrix $A$ composed of the rows and columns indexed by $1 \le k_1 < k_2 < \cdots < k_l \le n$, $1 \le l \le n$, is denoted by $A_{\{k_1,\ldots,k_l\}}$.

For any $m \times n$ real matrix $A$, the following matrix equations in $X$ are used to define various generalized inverses of $A$:

(1) $AXA = A$,  (2) $XAX = X$,  (3) $(AX)^T = AX$,  (4) $(XA)^T = XA$.

Also, in the case $m = n$, the following two additional equations are exploited:

(5) $AX = XA$,  $(1^k)$ $A^{k+1}X = A^k$,

where $(1^k)$ is valid for any positive integer $k$ satisfying $k \ge \mathrm{ind}(A) = \min\{p : \mathrm{rank}(A^{p+1}) = \mathrm{rank}(A^p)\}$. The set of matrices obeying the equations represented in $S$ is denoted by $A\{S\}$, for an arbitrary sequence $S$ of elements from $\{1, 2, 3, 4, 5, 1^k\}$. Any matrix $X \in A\{S\}$ is known as an $S$-inverse of $A$ and is denoted by $A^{(S)}$ [4].
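As an illustration (not part of the paper), the defining equations (1)-(4) can be checked numerically: NumPy's `pinv` returns a $\{1,2,3,4\}$-inverse, so all four equations should hold up to floating-point tolerance.

```python
import numpy as np

# A rectangular matrix A and a {1,2,3,4}-inverse X computed by NumPy.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
X = np.linalg.pinv(A)

# Verify the four defining equations (1)-(4).
eq1 = np.allclose(A @ X @ A, A)        # (1) AXA = A
eq2 = np.allclose(X @ A @ X, X)        # (2) XAX = X
eq3 = np.allclose((A @ X).T, A @ X)    # (3) (AX)^T = AX
eq4 = np.allclose((X @ A).T, X @ A)    # (4) (XA)^T = XA
print(eq1, eq2, eq3, eq4)              # → True True True True
```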
The matrix X satisfying equations (1) and (2) is said to be a reflexive g-inverse of $A$, whereas a matrix $X$ satisfying only equation (2) is called an outer inverse of $A$. Consequently, the Moore-Penrose inverse $X = A^{\dagger}$ of $A$ satisfies the equations (1), (2), (3) and (4), and the Drazin inverse $X = A^D$ of $A$ satisfies the equations $(1^k)$, (2) and (5).

∗Received by the editors on May 2, 2012. Accepted for publication on May 26, 2013. Handling Editor: Oscar Maria Baksalary.
†University of Niš, Department of Computer Science, Faculty of Science and Mathematics, Višegradska 33, 18000 Niš, Serbia ([email protected], [email protected]). The authors gratefully acknowledge support from Research Project 174013 of the Serbian Ministry of Science.

When the subproblems are of the same type as the original problem, the same recursive process can be carried out until the problem size is sufficiently small. This special type of Divide and Conquer (D&C) algorithm is referred to as D&C recursion. For a given function $T(n)$, the notations $\Theta(T(n))$ and $O(T(n))$ denote the following sets of functions (see, for example, [8]):

$\Theta(T(n)) = \{f(n) \mid 0 \le c_1 T(n) \le f(n) \le c_2 T(n) \text{ for all } n \ge n_0, \text{ for some } c_1, c_2, n_0 > 0\}$,
$O(T(n)) = \{f(n) \mid 0 \le f(n) \le c\, T(n) \text{ for all } n \ge n_0, \text{ for some } c, n_0 > 0\}$.

Denote by add(n), mul(n) and inv(n) the complexities of matrix addition, multiplication and inversion of $n \times n$ matrices. Also denote by mul(m,n,k) the complexity of multiplying an $m \times n$ matrix by an $n \times k$ matrix. In particular, mul(n) = mul(n,n,n). Results concerning block recursive algorithms in linear algebra, based on the LU decomposition, can be found, for example, in [12, 13, 16, 20, 22].
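The Drazin inverse defined above can also be illustrated numerically. The sketch below (not the paper's method) first computes the index ind(A) and then uses the classical representation $A^D = A^k (A^{2k+1})^{\dagger} A^k$, valid for $k \ge \mathrm{ind}(A)$:

```python
import numpy as np

def drazin(A, tol=1e-10):
    """Drazin inverse via A^D = A^k (A^{2k+1})^+ A^k with k = ind(A).
    A sketch for small matrices; `tol` is a rank-decision threshold."""
    n = A.shape[0]
    # ind(A): smallest k with rank(A^{k+1}) = rank(A^k).
    k, Ak = 0, np.eye(n)
    while np.linalg.matrix_rank(Ak @ A, tol) != np.linalg.matrix_rank(Ak, tol):
        Ak, k = Ak @ A, k + 1
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])            # singular, ind(A) = 1
X = drazin(A)
ok1 = np.allclose(A @ A @ X, A)       # (1^k) with k = 1: A^2 X = A^1
ok2 = np.allclose(X @ A @ X, X)       # (2) XAX = X
ok3 = np.allclose(A @ X, X @ A)       # (5) AX = XA
print(ok1, ok2, ok3)                  # → True True True
```

For a nonsingular A the loop exits with k = 0 and the function simply returns $A^{-1}$.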
Our paper [20] deals with the time complexity of the block recursive generalized Cholesky factorization and its applications to generalized inversion.

The Schur complement $S = (A/A_{11}) = A_{22} - A_{21}A_{11}^{-1}A_{12}$ of the block matrix

(1.1)  $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \in \mathbb{R}^{n\times n}, \quad A_{11} \in \mathbb{R}^{k\times k},$

is a basic tool in the computation of the inverse matrix $A^{-1}$ [3]. The generalized Schur complement $S = A_{22} - A_{21}A_{11}^{-}A_{12}$, where $A_{11}^{-}$ is a generalized inverse of $A_{11}$, plays an important role in representations of various generalized inverses $A^{-}$ [2, 5, 6, 9, 10, 17, 18, 23, 24]. Although there are many representations of different generalized inverses in Banachiewicz-Schur form, their computational aspects have not been well investigated so far.

This problem is investigated in the present paper. Furthermore, we construct an appropriate Strassen-type algorithm for generalized matrix inversion. The advantage of this algorithm is its computational complexity O(mul(n)); the drawback is that it produces only outer inverses. For this purpose, we also introduce one-step and partially block recursive Strassen-type algorithms. Although their time complexity is $O(n^3)$, they are efficient and practically applicable.

2. Strassen method for efficient matrix multiplication and inversion. Let A, B be $n \times n$ real or complex matrices. The usual algorithm for computing the matrix product $C = AB$ requires $n^3$ multiplications and $n^3 - n^2$ additions ($2n^3 - n^2 = O(n^3)$ floating point operations in total). In the paper [21], V. Strassen introduced an algorithm for matrix multiplication whose complexity is $O(n^{\log_2 7}) \approx O(n^{2.807})$, i.e., less than $O(n^3)$. Other algorithms computing the matrix product in time less than $O(n^3)$ are also known; currently the best one, due to Coppersmith and Winograd [7], runs in time $O(n^{2.376})$.
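A minimal sketch of Strassen's seven-multiplication recursion (not the paper's code) is shown below. It assumes the dimension is a power of two; the cutoff parameter `leaf`, below which ordinary multiplication is used, is an illustrative addition.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Strassen multiplication: 7 recursive products instead of 8.
    Assumes square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= leaf:                      # fall back to ordinary multiplication
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

rng = np.random.default_rng(0)
A = rng.random((128, 128))
B = rng.random((128, 128))
ok = np.allclose(strassen(A, B), A @ B)
print(ok)                              # → True
```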
An overview of various efficient matrix multiplication algorithms is presented in [11]. The authors of [11] also improved some efficient matrix multiplication algorithms in the case of small matrices.

Strassen [21] also introduced an algorithm for finding the inverse of a given matrix A of the form (1.1), with the same complexity as matrix multiplication.

Lemma 2.1. [21] Assume that A is partitioned as in (1.1) and

(2.1)  $X = A^{-1} = \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix} \in \mathbb{R}^{n\times n}, \quad X_{11} \in \mathbb{R}^{k\times k}.$

The matrices $X_{11}, X_{12}, X_{21}$ and $X_{22}$ are defined by the following relations:

(2.2)
 1. $R_1 = A_{11}^{-1}$      5. $R_5 = R_4 - A_{22}$      9. $R_7 = R_3 X_{21}$
 2. $R_2 = A_{21} R_1$      6. $R_6 = R_5^{-1}$        10. $X_{11} = R_1 - R_7$
 3. $R_3 = R_1 A_{12}$      7. $X_{12} = R_3 R_6$       11. $X_{22} = -R_6$.
 4. $R_4 = A_{21} R_3$      8. $X_{21} = R_6 R_2$

In the rest of this section, we assume that the matrix $A \in \mathbb{R}^{n\times n}$ is decomposed as in (1.1). The matrix $R_5$ in the relations (2.2) equals $R_5 = -(A_{22} - A_{21}A_{11}^{-1}A_{12}) = -S = -(A/A_{11})$. The intermediate matrices $R_1, \ldots, R_7$ are introduced to ensure a minimal number of matrix multiplications; however, these matrices require additional memory space to be stored. If we eliminate $R_1, \ldots, R_7$ from the relations (2.2), we obtain the well-known explicit form of the block matrix inverse [3]:

$X = A^{-1} = \begin{bmatrix} A_{11}^{-1} + A_{11}^{-1}A_{12}S^{-1}A_{21}A_{11}^{-1} & -A_{11}^{-1}A_{12}S^{-1} \\ -S^{-1}A_{21}A_{11}^{-1} & S^{-1} \end{bmatrix}.$

We now state the complete Strassen-type algorithm for fast matrix inversion (Algorithm 2.2).

Algorithm 2.2. Strassen-based matrix inversion.
Require: An invertible $n \times n$ matrix $A$ all of whose principal submatrices are invertible.
1: If $n = 1$ then return $X = [a_{11}^{-1}]$. Else decompose the matrix A as in (1.1) and continue.
2: Compute $X_{11}$, $X_{12}$, $X_{21}$ and $X_{22}$ using the formulas (2.2), where the inverses are computed recursively and one of the $\Theta(n^{2+\varepsilon})$ algorithms is used for matrix-matrix multiplication.
3: Return the inverse matrix $X = A^{-1}$.

The parameter $k$ in the decomposition (1.1) can be chosen arbitrarily, but the most frequent choice is $k = \lfloor n/2 \rfloor$. In Step 2, Algorithm 2.2 recursively computes the inverses of the principal submatrices and their Schur complements. Algorithm 2.2 also assumes that the recursion is continued down to the $1 \times 1$ level. The only situation in which Algorithm 2.2 may fail is Step 1, in the case $n = 1$ and $a_{11} = 0$.

It is hard to verify in advance the invertibility of both $A_{11}$ and $S$ in each recursive call of Algorithm 2.2. In the rest of this section, we give an equivalent condition which can be checked directly on the input matrix A. Such conditions have not been investigated in the papers concerning Algorithm 2.2.

Let $\beta, \gamma \subseteq \{1, 2, \ldots, n\}$ and $A \in \mathbb{R}^{n\times n}$. Denote by $A_{\beta,\gamma}$ the submatrix of A obtained by taking the rows of A indexed by $\beta$ and the columns of A indexed by $\gamma$. Also denote $\beta^c = \{1, 2, \ldots, n\} \setminus \beta$ and $A_{\beta} = A_{\beta,\beta}$. The following theorem is well known; its statement can be found, for example, in [25] (p. 112, Th. 4.8 and Th. 4.9), and is restated here as Proposition 2.3:

Proposition 2.3. Let $A \in \mathbb{R}^{n\times n}$ and $\beta \subseteq \{1, 2, \ldots, n\}$.
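The relations (2.2) and Algorithm 2.2 can be sketched as follows. This is a NumPy illustration, not the authors' implementation; as the algorithm requires, it assumes every $A_{11}$ block and Schur complement met in the recursion is invertible, and it uses ordinary `@` multiplication in place of a fast $\Theta(n^{2+\varepsilon})$ routine.

```python
import numpy as np

def block_inverse(A):
    """Recursive block inversion following relations (2.2)."""
    n = A.shape[0]
    if n == 1:                            # Step 1: the 1x1 base case
        return np.array([[1.0 / A[0, 0]]])
    k = n // 2                            # the usual choice k = floor(n/2)
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    R1 = block_inverse(A11)               # 1. R1 = A11^{-1}, recursively
    R2 = A21 @ R1                         # 2.
    R3 = R1 @ A12                         # 3.
    R4 = A21 @ R3                         # 4.
    R5 = R4 - A22                         # 5. R5 = -S, negated Schur complement
    R6 = block_inverse(R5)                # 6. R6 = R5^{-1} = -S^{-1}, recursively
    X12 = R3 @ R6                         # 7.
    X21 = R6 @ R2                         # 8.
    R7 = R3 @ X21                         # 9.
    X11 = R1 - R7                         # 10.
    X22 = -R6                             # 11.
    return np.block([[X11, X12], [X21, X22]])

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])           # positive definite, so the recursion
                                          # never meets a singular block
ok = np.allclose(block_inverse(A), np.linalg.inv(A))
print(ok)                                 # → True
```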