Generalized Matrix Inversion Is Not Harder Than Matrix Multiplication


Marko D. Petković*, Predrag S. Stanimirović
University of Niš, Department of Mathematics, Faculty of Science, Višegradska 33, 18000 Niš, Serbia
E-mail: [email protected], [email protected]
*Corresponding author

Abstract

Starting from the Strassen method for rapid matrix multiplication and inversion, as well as from the recursive Cholesky factorization algorithm, we introduce a completely block-recursive algorithm for the generalized Cholesky factorization of a given symmetric, positive semi-definite matrix $A \in \mathbb{R}^{n \times n}$. We use the Strassen method for matrix inversion together with the recursive generalized Cholesky factorization method, and establish an algorithm for computing generalized $\{2,3\}$ and $\{2,4\}$ inverses. The introduced algorithms are not harder than matrix-matrix multiplication.

AMS Subj. Class.: 15A09.

Key words: Cholesky factorization; Complexity analysis; Generalized inverses; Moore-Penrose inverse; Strassen method.

1 Introduction

The set of all $m \times n$ real matrices of rank $r$ is denoted by $\mathbb{R}_r^{m \times n}$. By $A_{(k_1,\ldots,k_l)}$ we denote the main diagonal minor of an $n \times n$ matrix $A$ corresponding to the rows and columns indexed by the indices $1 \le k_1 < k_2 < \cdots < k_l \le n$.

For any matrix $A \in \mathbb{R}^{m \times n}$ consider the following equations in $G$:
$$(1)\ AGA = A, \qquad (2)\ GAG = G, \qquad (3)\ (AG)^T = AG, \qquad (4)\ (GA)^T = GA,$$
where the superscript $T$ denotes the matrix transpose. For a sequence $S$ of elements from the set $\{1,2,3,4\}$, the set of matrices obeying the equations listed in $S$ is denoted by $A\{S\}$. A matrix from $A\{S\}$ is called an $S$-inverse of $A$ and is denoted by $A^{(S)}$. In particular, the Moore-Penrose inverse $G = A^\dagger$ of $A$ is unique and satisfies all of the equations (1)-(4).

The sets of $\{2,3\}$ and $\{2,4\}$ inverses of rank $s$, $0 < s < r = \operatorname{rank}(A)$, are denoted by $A\{2,3\}_s$ and $A\{2,4\}_s$, as in [4], and are defined in the following way:
$$A\{2,3\}_s = \{X \mid XAX = X,\ (AX)^* = AX,\ \operatorname{rank}(X) = s\},$$
$$A\{2,4\}_s = \{X \mid XAX = X,\ (XA)^* = XA,\ \operatorname{rank}(X) = s\}.$$

Our starting motivation in the present paper is the following Theorem 28.8 from [7]: matrix inversion is no harder than matrix multiplication. The theorem is stated under the assumption that we can multiply two $n \times n$ real matrices in time $mul(n) = \Omega(n^2)$, where $mul(n)$ satisfies the following two regularity conditions: $mul(n+k) = O(mul(n))$ for any $k$ in the range $0 \le k \le n$, and $mul(n/2) \le c \cdot mul(n)$ for some constant $c < 1$. Then the ordinary inverse of any real nonsingular $n \times n$ matrix can be computed in time $O(mul(n))$. Definitions of $\Theta(f(n))$, $\Omega(f(n))$ and $O(f(n))$ can be found, for example, in [7].

Let $A, B$ be $n \times n$ real or complex matrices. The number of scalar operations required to compute the matrix product $C = AB$ by the ordinary method is $2n^3 - n^2 = O(n^3)$ ($n^3$ multiplications and $n^3 - n^2$ additions). In [15], V. Strassen introduced an algorithm for matrix multiplication whose complexity is $O(n^{\log_2 7}) \approx O(n^{2.807})$, which is below $\Theta(n^3)$. There are other algorithms for computing the product $C = AB$ in time below $\Theta(n^3)$; currently the best one is due to Coppersmith and Winograd [6], and it works in time $O(n^{2.376})$.
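As a point of reference for the complexity figures above, the following is a minimal sketch, not taken from the paper, of Strassen's seven-product recursion for matrix multiplication, written in Python with NumPy. The function name strassen_multiply, the cutoff parameter, and the assumption that both matrices are square with the same power-of-two dimension are ours.

```python
import numpy as np

def strassen_multiply(A, B, cutoff=64):
    """Strassen's seven-product recursion for C = A B.

    Sketch only: A and B are assumed square with the same power-of-two
    dimension; below `cutoff` the ordinary product is used.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # Seven recursive products instead of the eight of the naive block method.
    M1 = strassen_multiply(A11 + A22, B11 + B22, cutoff)
    M2 = strassen_multiply(A21 + A22, B11, cutoff)
    M3 = strassen_multiply(A11, B12 - B22, cutoff)
    M4 = strassen_multiply(A22, B21 - B11, cutoff)
    M5 = strassen_multiply(A11 + A12, B22, cutoff)
    M6 = strassen_multiply(A21 - A11, B11 + B12, cutoff)
    M7 = strassen_multiply(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four blocks of the product.
    C = np.empty((n, n), dtype=np.result_type(A, B))
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C
```

The recursion satisfies $T(n) = 7\,T(n/2) + \Theta(n^2)$, which yields the $O(n^{\log_2 7})$ bound quoted above.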
Strassen in [15] also introduced an algorithm for finding the inverse of a given $n \times n$ matrix $A$ with the same complexity as matrix multiplication. This algorithm is based on a block decomposition of the matrix $A$ and the analogous decomposition of its ordinary inverse.

Lemma 1.1. [15] Let $A$ be a given $n \times n$ matrix partitioned in the following way
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad A_{11} \in \mathbb{R}^{k \times k}, \tag{1.1}$$
and let both $A$ and $A_{11}$ be regular. Then the inverse matrix $X = A^{-1}$ can be represented in the well-known block matrix inversion form [3]:
$$X = \begin{bmatrix} X_{11} & X_{12} \\ X_{21} & X_{22} \end{bmatrix}
  = \begin{bmatrix} A_{11}^{-1} + A_{11}^{-1} A_{12} S^{-1} A_{21} A_{11}^{-1} & -A_{11}^{-1} A_{12} S^{-1} \\ -S^{-1} A_{21} A_{11}^{-1} & S^{-1} \end{bmatrix}, \tag{1.2}$$
where $S = A_{22} - A_{21} A_{11}^{-1} A_{12} = (A/A_{11})$ is the Schur complement of $A_{11}$ in the matrix $A$.

The number of matrix multiplications required to compute the blocks $X_{11}, X_{12}, X_{21}$ and $X_{22}$ in the block form (1.2) can be decreased below $\Theta(n^3)$ using the temporary matrices $R_1, \ldots, R_7$ and the following relations [15]:
$$\begin{array}{llll}
1. & R_1 = A_{11}^{-1} & 7. & X_{12} = R_3 R_6 \\
2. & R_2 = A_{21} R_1 & 8. & X_{21} = R_6 R_2 \\
3. & R_3 = R_1 A_{12} & 9. & R_7 = R_3 X_{21} \\
4. & R_4 = A_{21} R_3 & 10. & X_{11} = R_1 - R_7 \\
5. & R_5 = R_4 - A_{22} & 11. & X_{22} = -R_6 \\
6. & R_6 = R_5^{-1} & &
\end{array} \tag{1.3}$$
Let us notice that the matrix $R_5$ in the relations (1.3) is equal to the negative Schur complement of $A_{11}$ in the matrix $A$, i.e. $R_5 = -(A/A_{11})$.

Formulas (1.2) and (1.3) are applicable if both $A_{11}$ and the Schur complement $S = (A/A_{11})$ are invertible.
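The data flow of the relations (1.3) can be illustrated by the following Python/NumPy sketch of a single, non-recursive application of the formulas. The function name block_inverse_step is ours; the two sub-inverses $R_1$ and $R_6$ are obtained from numpy.linalg.inv rather than recursively, and $A_{11}$ together with the Schur complement are assumed invertible, exactly as required above.

```python
import numpy as np

def block_inverse_step(A, k):
    """One application of relations (1.3): invert A via its k x k leading block.

    Sketch only: the sub-inverses in steps 1 and 6 are delegated to
    numpy.linalg.inv instead of being computed recursively.
    """
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    R1 = np.linalg.inv(A11)   # 1.  R1 = A11^{-1}
    R2 = A21 @ R1             # 2.
    R3 = R1 @ A12             # 3.
    R4 = A21 @ R3             # 4.
    R5 = R4 - A22             # 5.  R5 = -(A/A11), the negative Schur complement
    R6 = np.linalg.inv(R5)    # 6.  R6 = -S^{-1}
    X12 = R3 @ R6             # 7.
    X21 = R6 @ R2             # 8.
    R7 = R3 @ X21             # 9.
    X11 = R1 - R7             # 10.
    X22 = -R6                 # 11.
    return np.block([[X11, X12], [X21, X22]])
```

For a well-conditioned matrix satisfying the invertibility requirements, block_inverse_step(A, A.shape[0] // 2) @ A should reproduce the identity up to rounding error.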
Our main intention in the present paper is the development of an algorithm for rapid computation of $\{2,3\}$ and $\{2,4\}$ generalized inverses, with complexity not greater than the matrix multiplication complexity. Representations of $\{2,3\}$ and $\{2,4\}$ inverses are established in [14]; they are based on the generalized Cholesky decomposition defined in [9] and on ordinary matrix inversion. Therefore, we are led to use the Strassen algorithm for matrix inversion and to develop an algorithm which computes the generalized Cholesky factorization within the matrix multiplication complexity.

In order to accomplish this idea, we organize the paper as follows. In the second section we state a recursive algorithm for rapid matrix inversion, not harder than matrix multiplication. A new Strassen-type fully recursive algorithm for the simultaneous fast computation of the Cholesky factorization matrix $U$ satisfying $A = U^T U$ and of its inverse $Y$ is introduced in Section 3. The algorithm is applicable to symmetric positive-definite matrices. A generalization of this algorithm to positive semi-definite matrices gives an analogous recursive algorithm for the generalized Cholesky decomposition from [9]; in that case the matrix $Y$ becomes a $\{1,2,3\}$ inverse of $U$. In the fourth section we combine the representations from [14] with the efficient generalized Cholesky decomposition, and develop algorithms for computing the Moore-Penrose inverse and various classes of $\{2,3\}$ and $\{2,4\}$ generalized inverses. These algorithms are not harder than matrix multiplication. The algorithms are implemented in the package MATHEMATICA and numerical examples are presented.

2 Strassen matrix inversion method

Formulas (1.3) can be used for the recursive computation of the matrix inverse $A^{-1}$. Relations 1. and 6. in (1.3) use inverses of matrices of smaller dimensions ($k \times k$ and $(n-k) \times (n-k)$, respectively). By applying the same formulas recursively to these submatrices, one obtains a recursive method for matrix inversion. The recursion can be continued down to the case of $1 \times 1$ matrices. The original Strassen matrix inversion algorithm is based on the following two principles:

P1. in steps 1. and 6., recursively compute the inverses of the smaller-dimension matrices, continuing the recursion down to the level of $1 \times 1$ matrices;

P2. use Strassen's matrix-matrix multiplication method to perform all the matrix multiplications (steps 2, 3, 4, 7, 8 and 9).

Now we state a Strassen-type algorithm for matrix inversion, based on the principle P1. Any correct method for matrix multiplication can be used; the chosen matrix multiplication method determines the complexity of the algorithm (a code sketch of the algorithm is given after Proposition 2.1 below).

Algorithm 2.1. (Strassen-based matrix inversion)
Input: A regular $n \times n$ matrix $A$, all of whose main diagonal minors are regular.
Step 1. If $n = 1$ then return $X = [a_{11}^{-1}]$. Otherwise, decompose the matrix $A$ with $k = \lfloor n/2 \rfloor$ as in (1.1) and continue.
Step 2. Apply formulas (1.3), where the inverses are computed fully recursively according to the principle P1.
Step 3. Return the inverse matrix $X = A^{-1}$ as in (1.2).

Denote by $inv(n)$ the complexity of Algorithm 2.1. Also denote by $add(n)$ the complexity of matrix addition of $n \times n$ matrices and by $mul(m,n,k)$ the complexity of multiplying an $m \times n$ matrix by an $n \times k$ matrix, and let $mul(n) = mul(n,n,n)$. Moreover, denote by $invs(n)$, $adds(n)$ and $muls(m,n,k)$ the corresponding storage complexities of Algorithm 2.1, of matrix addition of $n \times n$ matrices, and of the multiplication of an $m \times n$ matrix by an $n \times k$ matrix, and let $muls(n) = muls(n,n,n)$.

Remark 2.1. If any algorithm for matrix-matrix multiplication with complexity $O(n^{2+\varepsilon})$, $0 < \varepsilon < 1$, is used, then Algorithm 2.1 also works with complexity $O(n^{2+\varepsilon})$. In particular, if Strassen's matrix-matrix multiplication algorithm and full recursion are applied, Algorithm 2.1 requires
$$\frac{6}{5}\, n^{\log_2 7} - \frac{1}{5}\, n \approx n^{2.807}$$
multiplications [2, 12, 15]. Otherwise, if the usual matrix-matrix multiplication algorithm with ordinary time complexity $O(n^3)$ is used, then the complexity of Algorithm 2.1 is $O(n^3)$.

Proposition 2.1. The storage complexity of Algorithm 2.1 is $\Theta(n^2)$.

Proof. Note that the storage complexity of the usual matrix-matrix multiplication algorithm, as well as of the known methods for matrix multiplication with complexity $mul(n) = O(n^{2+\varepsilon})$, is equal to $\Theta(n^2)$.
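To make Algorithm 2.1 concrete, the following is a minimal fully recursive Python/NumPy sketch of it, not the authors' MATHEMATICA implementation. The function name strassen_inverse is ours; the ordinary @ product stands in for whichever multiplication routine is chosen (principle P2 would substitute a fast method such as the strassen_multiply sketch in Section 1), and, as in the input condition of the algorithm, all main diagonal minors of $A$ are assumed regular.

```python
import numpy as np

def strassen_inverse(A):
    """Fully recursive sketch of Algorithm 2.1 (principle P1).

    The recursion follows relations (1.3) with k = n // 2 and proceeds
    down to 1 x 1 blocks; all main diagonal minors of A are assumed regular.
    """
    n = A.shape[0]
    if n == 1:
        return np.array([[1.0 / A[0, 0]]])
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    R1 = strassen_inverse(A11)   # step 1 of (1.3), computed recursively
    R2 = A21 @ R1
    R3 = R1 @ A12
    R4 = A21 @ R3
    R5 = R4 - A22                # negative Schur complement
    R6 = strassen_inverse(R5)    # step 6 of (1.3), computed recursively
    X12 = R3 @ R6
    X21 = R6 @ R2
    R7 = R3 @ X21
    X11 = R1 - R7
    X22 = -R6
    return np.block([[X11, X12], [X21, X22]])
```

With the ordinary @ product this sketch runs in $O(n^3)$ time; replacing it by an $O(n^{2+\varepsilon})$ multiplication routine gives the $O(n^{2+\varepsilon})$ bound of Remark 2.1.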
