Block-Tridiagonal Matrices
FMB - NLA

Where do block-tridiagonal matrices arise?
- as a result of a particular mesh-point ordering;
- as part of a factorization procedure, for example when we compute the eigenvalues of a matrix.

Consider a two-dimensional domain partitioned into strips $\Omega_1, \Omega_2, \Omega_3, \ldots$ Assume that points on the lines of intersection are coupled only to their nearest neighbors in the underlying mesh (and that we do not have periodic boundary conditions). Hence, there is no coupling between subdomains except through the "glue" on the interfaces.

When the subdomains are ordered lexicographically from left to right, a domain $\Omega_i$ becomes coupled only to its predecessor $\Omega_{i-1}$ and its successor $\Omega_{i+1}$, and the corresponding matrix takes the form of a block-tridiagonal matrix, $A = \operatorname{tridiag}(A_{i,i-1}, A_{i,i}, A_{i,i+1})$, or

$$
A = \begin{bmatrix}
A_{11} & A_{12} & & 0 \\
A_{21} & A_{22} & A_{23} & \\
& \ddots & \ddots & \ddots \\
0 & & A_{n,n-1} & A_{n,n}
\end{bmatrix}.
$$

For definiteness we let the boundary meshline $\overline{\Omega}_i \cap \overline{\Omega}_{i+1}$ belong to $\Omega_i$. In order to preserve the sparsity pattern we shall factor $A$ without use of permutations. Naturally, the lines of intersection do not have to be straight.

How do we factorize a (block-)tridiagonal matrix?

Let $A$ be block-tridiagonal, expressed as $A = D_A - L_A - U_A$, where $D_A$ is the (block-)diagonal part and $L_A$, $U_A$ are the strictly lower and upper (block-)triangular parts, taken with a minus sign. It is convenient to seek $L$, $D$, $U$ such that $A = L D^{-1} U$, where $D$ is (block-)diagonal, $L = D - L_A$ and $U = D - U_A$. Direct computation gives

$$
L D^{-1} U = (D - L_A)\, D^{-1} (D - U_A) = D - L_A - U_A + L_A D^{-1} U_A = D_A - L_A - U_A,
$$

i.e., $D_A = D + L_A D^{-1} U_A$. Important: $L_A$ and $U_A$ are strictly lower and upper triangular, so for a (block-)tridiagonal $A$ the product $L_A D^{-1} U_A$ contributes only to the (block-)diagonal, and the off-diagonal parts of $A$ are reproduced exactly.
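The identity $D_A = D + L_A D^{-1} U_A$ determines the block pivots recursively: $D_1 = A_{11}$ and $D_i = A_{ii} - A_{i,i-1} D_{i-1}^{-1} A_{i-1,i}$. A minimal NumPy sketch of this recurrence (the function name and the list-of-blocks storage are my own, not from the notes):

```python
import numpy as np

def block_ldu_pivots(A_diag, A_sub, A_sup):
    """Block pivots D_i of the factorization A = (D - L_A) D^{-1} (D - U_A).

    A_diag[i] holds A_{i,i}, A_sub[i] holds A_{i+1,i}, A_sup[i] holds A_{i,i+1}
    (0-based lists of dense blocks).  Returns the list of pivot blocks D_i.
    """
    D = [np.array(A_diag[0], dtype=float)]
    for i in range(1, len(A_diag)):
        # D_i = A_{ii} - A_{i,i-1} D_{i-1}^{-1} A_{i-1,i}
        D.append(A_diag[i] - A_sub[i - 1] @ np.linalg.solve(D[i - 1], A_sup[i - 1]))
    return D
```

With these pivots, $L = D - L_A$ and $U = D - U_A$ are block bidiagonal, and $A = L D^{-1} U$ can be verified by direct multiplication.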
$A = L D^{-1} U$ for pointwise tridiagonal matrices. With $D = \operatorname{diag}(d_1, \ldots, d_n)$, the factors are bidiagonal:

$$
L = \begin{bmatrix} d_1 & & & 0 \\ a_{21} & d_2 & & \\ & \ddots & \ddots & \\ 0 & & a_{n,n-1} & d_n \end{bmatrix},
\qquad
U = \begin{bmatrix} d_1 & a_{12} & & 0 \\ & d_2 & a_{23} & \\ & & \ddots & \ddots \\ 0 & & & d_n \end{bmatrix}.
$$

Factorization algorithm:
$$ d_1 = a_{11}, \qquad d_i = a_{ii} - a_{i,i-1}\, d_{i-1}^{-1}\, a_{i-1,i}, \quad i = 2, \ldots, n. $$

Solution of systems with $L D^{-1} U$: one forward substitution with $L$, a scaling with $D$, and one backward substitution with $U$.

Block-tridiagonal matrices. Let $A$ be block-tridiagonal, expressed as $A = D_A - L_A - U_A$. One can envisage three major versions of the factorization algorithm:

(i) $A = (D - L_A)\, D^{-1} (D - U_A)$, with
$$ D_1 = A_{11}, \qquad D_i = A_{ii} - A_{i,i-1}\, D_{i-1}^{-1} A_{i-1,i}; $$

(ii) $A = (\widetilde D^{-1} - L_A)\, \widetilde D\, (\widetilde D^{-1} - U_A)$, where $\widetilde D = D^{-1}$ is computed directly as
$$ \widetilde D_1 = A_{11}^{-1}, \qquad \widetilde D_i = \big(A_{ii} - A_{i,i-1}\, \widetilde D_{i-1} A_{i-1,i}\big)^{-1}; $$

(iii) $A = (I - \widetilde L_A)\, D\, (I - \widetilde U_A)$ (inverse-free substitutions), where $\widetilde L_A = L_A D^{-1}$ and $\widetilde U_A = D^{-1} U_A$. Here
$$ A^{-1} = (I - \widetilde U_A)^{-1}\, D^{-1} (I - \widetilde L_A)^{-1}, $$
and, since $\widetilde U_A$ is strictly upper triangular and hence nilpotent,
$$ (I - \widetilde U_A)^{-1} = \big(I + \widetilde U_A^{\,2^{s-1}}\big) \cdots \big(I + \widetilde U_A^{\,2}\big)\big(I + \widetilde U_A\big) \quad \text{for } 2^s \ge n, $$
and similarly for $(I - \widetilde L_A)^{-1}$.

Existence of the factorization for block-tridiagonal matrices. We assume that the matrices are real. It can be shown that the successive pivot blocks are always nonsingular for two important classes of matrices, namely for
- matrices which are positive definite, i.e., $x^T A x > 0$ for all $x \neq 0$, $x \in \mathbb{R}^n$ (if $A$ has order $n$);
- blockwise generalized diagonally dominant matrices (also called block $H$-matrices), i.e., matrices for which the diagonal blocks are nonsingular and
$$ \|A_{ii}^{-1} A_{i,i-1}\| + \|A_{ii}^{-1} A_{i,i+1}\| \le 1, \qquad i = 1, 2, \ldots, n $$
(here $A_{1,0} = 0$ and $A_{n,n+1} = 0$).

The factorization passes through the stages $r = 1, 2, \ldots, n-1$. For the two classes above, the successive top blocks (i.e., the pivot matrices which arise after every factorization stage) are nonsingular. At every stage the current matrix $A^{(r)}$ is partitioned into $2 \times 2$ blocks; for $r = 1$,

$$
A^{(1)} = A = \begin{bmatrix}
A_{11} & A_{12} & & 0 \\
A_{21} & A_{22} & A_{23} & \\
& \ddots & \ddots & \ddots \\
0 & & A_{n,n-1} & A_{nn}
\end{bmatrix}
= \begin{bmatrix} A^{(1)}_{11} & A^{(1)}_{12} \\ A^{(1)}_{21} & A^{(1)}_{22} \end{bmatrix}.
$$

At the $r$th stage we compute $D^{(r)}_{11} = A^{(r)}_{11}$ and factor

$$
A^{(r)} = \begin{bmatrix} I & 0 \\ A^{(r)}_{21} \big(A^{(r)}_{11}\big)^{-1} & I \end{bmatrix}
\begin{bmatrix} A^{(r)}_{11} & A^{(r)}_{12} \\ 0 & A^{(r+1)} \end{bmatrix},
$$

where $A^{(r+1)} = A^{(r)}_{22} - A^{(r)}_{21} \big(A^{(r)}_{11}\big)^{-1} A^{(r)}_{12}$ is the so-called Schur complement.
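For the pointwise case, the factorization recurrence and the two substitution sweeps can be sketched as follows (a NumPy sketch; the function names and the three-array storage of the tridiagonal matrix are assumptions of mine, not from the notes):

```python
import numpy as np

def tridiag_factor(a, b, c):
    """Pivots d_i for the pointwise factorization A = (D - L_A) D^{-1} (D - U_A).

    a : diagonal entries a_{ii}           (length n)
    b : subdiagonal entries a_{i,i-1}     (length n-1)
    c : superdiagonal entries a_{i-1,i}   (length n-1)
    """
    d = np.empty(len(a))
    d[0] = a[0]
    for i in range(1, len(a)):
        # d_i = a_{ii} - a_{i,i-1} a_{i-1,i} / d_{i-1}
        d[i] = a[i] - b[i - 1] * c[i - 1] / d[i - 1]
    return d

def tridiag_solve(a, b, c, rhs):
    """Solve A x = rhs via A = L D^{-1} U: forward sweep, then backward sweep."""
    d = tridiag_factor(a, b, c)
    n = len(a)
    z = np.empty(n)
    x = np.empty(n)
    z[0] = rhs[0]
    for i in range(1, n):                       # forward: accumulate z = D y, L y = rhs
        z[i] = rhs[i] - b[i - 1] * z[i - 1] / d[i - 1]
    x[-1] = z[-1] / d[-1]
    for i in range(n - 2, -1, -1):              # backward: U x = z
        x[i] = (z[i] - c[i] * x[i + 1]) / d[i]
    return x
```

This is essentially the classical Thomas algorithm, written in the $L D^{-1} U$ form used in these notes.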
The factorization of a block matrix is equivalent to the block Gaussian elimination of it. Note then that the only block in $A^{(r)}_{22}$ which will be affected by the elimination (of the block $A^{(r)}_{21}$) is the top block of the block-tridiagonal decomposition of $A^{(r)}_{22}$, i.e., $A^{(r+1)}_{11}$, the new pivot matrix. We show that for the above matrix classes the Schur complement $A^{(r+1)} = A^{(r)}_{22} - A^{(r)}_{21} \big(A^{(r)}_{11}\big)^{-1} A^{(r)}_{12}$ belongs to the same class as $A^{(r)}$, i.e., in particular, that the pivot blocks are nonsingular.

Lemma 1. Let $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$ be positive definite. Then $A_{ii}$, $i = 1, 2$, and the Schur complement $S = A_{22} - A_{21} A_{11}^{-1} A_{12}$ are also positive definite.

Proof. There holds $x^T A x = x_1^T A_{11} x_1$ for all $x = (x_1, 0)^T$. Hence $x_1^T A_{11} x_1 > 0$ for all $x_1 \neq 0$, i.e., $A_{11}$ is positive definite. Similarly, it can be shown that $A_{22}$ is positive definite. Since $A$ is nonsingular,
$$ x^T A x = x^T A\, A^{-1} A\, x = y^T A^{-1} y \quad \text{for } y = A x, $$
so $y^T A^{-1} y > 0$ for all $y \neq 0$, i.e., the inverse of $A$ is also positive definite. Use now the explicit form of the inverse, computed by use of the factorization of $A$:
$$
A^{-1} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1}
= \begin{bmatrix} I & -A_{11}^{-1} A_{12} \\ 0 & I \end{bmatrix}
\begin{bmatrix} A_{11}^{-1} & 0 \\ 0 & S^{-1} \end{bmatrix}
\begin{bmatrix} I & 0 \\ -A_{21} A_{11}^{-1} & I \end{bmatrix}
= \begin{bmatrix} \star & \star \\ \star & S^{-1} \end{bmatrix},
$$
where $\star$ indicates entries not important for the present discussion. Hence, since $A^{-1}$ is positive definite, so is its diagonal block $S^{-1}$, and therefore also $S$ is positive definite.

Corollary 1. When $A^{(r)}$ is positive definite, $A^{(r+1)}$ and, in particular, $A^{(r+1)}_{11}$ are positive definite.

Proof. $A^{(r+1)}$ is a Schur complement of $A^{(r)}$, so by Lemma 1 it is positive definite when $A^{(r)}$ is. In particular, its top diagonal block is positive definite.

Lemma 2. Let $A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$ be blockwise generalized diagonally dominant, where $A$ is block-tridiagonal. Then the Schur complement $S = A_{22} - A_{21} A_{11}^{-1} A_{12}$ is also blockwise generalized diagonally dominant.
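Lemma 1 can be checked numerically. The sketch below (my own construction, not from the notes) builds a random symmetric positive definite matrix and verifies that its Schur complement has only positive eigenvalues:

```python
import numpy as np

# Numerical illustration of Lemma 1 (not a proof): for a positive definite A,
# the Schur complement S = A22 - A21 A11^{-1} A12 is again positive definite.
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)          # symmetric positive definite by construction
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]
S = A22 - A21 @ np.linalg.solve(A11, A12)
print(np.linalg.eigvalsh(S).min() > 0)   # prints True, as Lemma 1 guarantees
```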
Proof (hint). Since the only block in $S$ which has changed from $A_{22}$ is its top block (the old pivot replaced by the new one), it suffices to show that this block is nonsingular and that the first block column of $S$ is generalized diagonally dominant.

Linear recursions. Consider the solution of the linear system $Ax = b$, where $A$ has already been factorized as $A = LU$ or $A = L D^{-1} U$. The matrices $L = \{l_{i,j}\}$ and $U = \{u_{i,j}\}$ are lower- and upper-triangular, respectively. To compute $x$, we must perform two steps (written here for $L$ and $U$ normalized to unit diagonal):

forward substitution: $Lz = b$, i.e.,
$$ z_1 = b_1, \qquad z_i = b_i - \sum_{k=1}^{i-1} l_{i,k}\, z_k, \quad i = 2, 3, \ldots, n; $$

backward substitution: $Ux = z$, i.e.,
$$ x_n = z_n, \qquad x_i = z_i - \sum_{k=i+1}^{n} u_{i,k}\, x_k, \quad i = n-1, \ldots, 1. $$

While the implementation of forward and backward substitution on a serial computer is trivial, implementing them on a vector or parallel computer system is problematic. The reason is that these relations are particular examples of a linear recursion, which is an inherently sequential process. A general $m$-level recurrence relation reads
$$ x_i = a_{1,i}\, x_{i-1} + a_{2,i}\, x_{i-2} + \cdots + a_{m,i}\, x_{i-m} + b_i, $$
and the performance of its straightforward vector or parallel implementation is degraded by the backward data dependencies.

Can we somehow speed up the solution of systems with bi- or tridiagonal matrices?

Multifrontal solution methods.

[Figure: (a) the two-way frontal method: the unknowns are renumbered 1, 3, 5, 7, 9, ..., 8, 6, 4, 2 from the two ends toward the middle node $n_0$; (b) the structure of the resulting matrix $A$.]

Any tridiagonal or block-tridiagonal matrix can be attacked in parallel from both ends, after a proper renumbering of the unknowns. We can then work independently on the odd-numbered and the even-numbered points until we have eliminated all entries except the final corner one. Hence, the factorization and forward substitution can occur in parallel for the two fronts (the odd and the even).
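The two-way elimination can be sketched for a pointwise tridiagonal system as follows (a serial NumPy sketch of the parallel idea; in practice the two fronts would run on separate processors, and all names here are my own):

```python
import numpy as np

def twisted_solve(a, b, c, rhs):
    """Solve a tridiagonal system by eliminating from both ends at once.

    a : diagonal (length n), b : subdiagonal (length n-1),
    c : superdiagonal (length n-1).  The two elimination fronts meet at the
    middle node m = n // 2.
    """
    n = len(a)
    m = n // 2
    d = np.array(a, dtype=float)
    y = np.array(rhs, dtype=float)
    # front 1: eliminate the subdiagonal in rows 1 .. m (top toward middle)
    for i in range(1, m + 1):
        f = b[i - 1] / d[i - 1]
        d[i] -= f * c[i - 1]
        y[i] -= f * y[i - 1]
    # front 2: eliminate the superdiagonal in rows n-2 .. m (bottom toward middle)
    for i in range(n - 2, m - 1, -1):
        f = c[i] / d[i + 1]
        d[i] -= f * b[i]
        y[i] -= f * y[i + 1]
    # the middle ("corner") unknown is now fully decoupled
    x = np.empty(n)
    x[m] = y[m] / d[m]
    for i in range(m - 1, -1, -1):      # back-substitute toward the top
        x[i] = (y[i] - c[i] * x[i + 1]) / d[i]
    for i in range(m + 1, n):           # back-substitute toward the bottom
        x[i] = (y[i] - b[i - 1] * x[i - 1]) / d[i]
    return x
```

The two elimination loops are independent until they meet at row $m$, the final corner entry mentioned above, so on two processors each front does roughly half the work; the same holds for the two back-substitution loops.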