SIAM J. MATRIX ANAL. APPL.                                  © 2007 Society for Industrial and Applied Mathematics
Vol. 29, No. 3, pp. 826–837

PARALLEL BIDIAGONALIZATION OF A DENSE MATRIX∗

CARLOS CAMPOS†, DAVID GUERRERO‡, VICENTE HERNÁNDEZ‡, AND RUI RALHA§

Abstract. A new stable method for the reduction of rectangular dense matrices to bidiagonal form has been proposed recently. This is a one-sided method, since it can be expressed entirely in terms of operations with (full) columns of the matrix under transformation. The algorithm is well suited to parallel computing and, in order to make it even more attractive for distributed memory systems, we introduce in this paper a modification which halves the number of communication instances. A block organization of the algorithm that would use level 3 BLAS routines seems difficult and, at least for the moment, it relies upon level 2 BLAS routines. Nevertheless, we found that our sequential code is competitive with the LAPACK DGEBRD routine. We also compare the time taken by our parallel codes and the ScaLAPACK PDGEBRD routine. We investigated the best data distribution schemes for the different codes and conclude that our parallel codes are also competitive with the ScaLAPACK routine.

Key words. bidiagonal reduction, parallel algorithms

AMS subject classifications. 15A18, 65F30, 68W10

DOI. 10.1137/05062809X

∗Received by the editors March 30, 2005; accepted for publication (in revised form) by J. L. Barlow February 1, 2007; published electronically July 25, 2007. http://www.siam.org/journals/simax/29-3/62809.html
†Departamento de Matemática, Escola Superior de Tecnologia e Gestão (Leiria), Campus 2, Morro do Lena, Alto do Vieiro, 2401-951 Leiria, Portugal ([email protected]).
‡Departamento de Sistemas Informáticos y Computación, Universidad Politécnica de Valencia, Camino de Vera s/n, E-46022 Valencia, Spain ([email protected], [email protected]).
§Departamento de Matemática, Universidade do Minho, 4710-057 Braga, Portugal (r ralha@math.uminho.pt). The research of this author was supported by the Portuguese Foundation for Science and Technology through the research program POCI 2010.

1. Introduction. The problem of computing the singular value decomposition (SVD) of a matrix is one of the most important operations in numerical linear algebra and is employed in a variety of applications. The SVD is defined as follows. For any rectangular matrix A ∈ R^{m×n} (we will assume that m ≥ n), there exist two orthogonal matrices U ∈ R^{m×m} and V ∈ R^{n×n} and a matrix Σ = [Σ_A; 0] ∈ R^{m×n} (Σ_A stacked on an (m−n)×n zero block), where Σ_A = diag(σ_1, …, σ_n) is a diagonal matrix, such that A = UΣV^t. The values σ_1 ≥ ⋯ ≥ σ_n ≥ 0 are called the singular values of A. To compute the SVD of a dense matrix, an important class of methods starts with Householder bidiagonalization [11], [12], [14], [15], [19], [20], [21], reducing A to upper bidiagonal form B ∈ R^{n×n}, from which the singular values are computed in an iterative manner [2], [16], [18], [22].

The Householder bidiagonalization computes U_B ∈ R^{m×m} and V_B ∈ R^{n×n}, as products of Householder matrices, such that

(1)   A = U_B \begin{bmatrix} B \\ 0 \end{bmatrix} V_B^t, \quad \text{where} \quad
      B = \begin{bmatrix}
            \alpha_1 & \beta_2  &        &          \\
                     & \alpha_2 & \ddots &          \\
                     &          & \ddots & \beta_n  \\
                     &          &        & \alpha_n
          \end{bmatrix}.

The classical method for producing this decomposition is a two-sided algorithm which employs both premultiplication and postmultiplication by Householder matrices.
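As a point of reference for the one-sided methods discussed below, the classical two-sided reduction can be sketched in a few lines of NumPy. The following code is only an illustration written for this presentation (it forms U_B and V_B explicitly and is unblocked, unlike the LAPACK routine DGEBRD); the function names are ours.

import numpy as np

def house(x):
    """Householder vector v and coefficient beta such that
    (I - beta*v*v^T) x = (±||x||, 0, ..., 0)^T.  Minimal sketch, no scaling tricks."""
    v = x.astype(float).copy()
    sigma = np.linalg.norm(x)
    if sigma == 0.0:
        return v, 0.0                        # nothing to annihilate
    alpha = -np.sign(x[0]) * sigma if x[0] != 0 else -sigma
    v[0] -= alpha
    beta = 2.0 / np.dot(v, v)
    return v, beta

def golub_kahan_bidiag(A):
    """Two-sided Householder bidiagonalization of A (m >= n): returns an
    n x n upper bidiagonal B together with U (m x m) and V (n x n) such that
    A = U * [B; 0] * V^T.  Illustrative only."""
    A = A.astype(float).copy()
    m, n = A.shape
    U = np.eye(m)
    V = np.eye(n)
    for k in range(n):
        # left Householder: annihilate A[k+1:, k]
        v, beta = house(A[k:, k])
        A[k:, k:] -= beta * np.outer(v, v @ A[k:, k:])
        U[:, k:]  -= beta * np.outer(U[:, k:] @ v, v)
        if k < n - 2:
            # right Householder: annihilate A[k, k+2:]
            v, beta = house(A[k, k+1:])
            A[k:, k+1:] -= beta * np.outer(A[k:, k+1:] @ v, v)
            V[:, k+1:]  -= beta * np.outer(V[:, k+1:] @ v, v)
    return A[:n, :n], U, V

A quick check of the sketch: for a random A with m ≥ n and B, U, V = golub_kahan_bidiag(A), the product U @ np.vstack([B, np.zeros((m - n, n))]) @ V.T reproduces A to roundoff. Each step of the loop touches both a full column (left reflector) and a full row block (right reflector) of the reduced matrix; the one-sided methods described next work with full columns only.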
In order to establish an algorithm which is better suited to parallel implementation than the standard bidiagonalization method, Ralha [25], [26] and Ralha and Mackiewicz [27] proposed a new technique that uses only multiplication on the right side of A by Householder matrices. Inspired by Ralha's method, Barlow, Bosner, and Drmač [4] proposed a new stable method for the reduction of rectangular dense matrices to bidiagonal form.

In this paper we present both methods and propose a modification to the Barlow method which halves the number of communication instances in the parallel implementation, making the algorithm even more attractive for distributed memory systems.

This paper is organized as follows. In section 2 we describe the new bidiagonalization methods. Section 3 deals with the sequential and parallel implementations, using LAPACK [1] and ScaLAPACK [8] routines. In section 4 we analyze the experimental results of our numerical tests. Section 5 summarizes our conclusions and future work.

2. New bidiagonalization methods.

2.1. Ralha bidiagonal reduction. Given a rectangular dense matrix A ∈ R^{m×n}, the bidiagonalization method proposed by Ralha is comprised of two stages. The first stage consists of a sequence of n − 2 Householder transformations

(2)   A_r = A_{r-1} \cdot \mathrm{diag}(I_r, H_r), \quad r = 1, \ldots, n-2,

where I_r is the identity matrix of order r, A_0 = A, and the columns a_i and a_j of the final matrix A_{n−2} satisfy

(3)   a_i^t a_j = 0 \quad \text{for } |i - j| > 1.

This can be understood as an implicit reduction of the symmetric positive semidefinite matrix A^t A to tridiagonal form. In the rth step, the construction of the Householder vector v_r in

(4)   H_r = I_{n-r} - \frac{2}{v_r^t v_r} v_r v_r^t

requires the computation of n − r dot products involving the appropriate columns of A_{r−1}.

Having produced A_{n−2}, the second stage is a variant of the Gram–Schmidt orthogonalization method that produces the factorization A_{n−2} = QB, where B is the required upper bidiagonal matrix. Representing by a_i and q_i the columns of A_{n−2} and Q, respectively, we have

(5)   \begin{bmatrix} a_1 & \cdots & a_i & \cdots & a_n \end{bmatrix} =
      \begin{bmatrix} q_1 & \cdots & q_i & \cdots & q_n \end{bmatrix}
      \begin{bmatrix}
        \alpha_1 & \beta_2  &        &          &        &          \\
                 & \alpha_2 & \ddots &          &        &          \\
                 &          & \ddots & \beta_i  &        &          \\
                 &          &        & \alpha_i & \ddots &          \\
                 &          &        &          & \ddots & \beta_n  \\
                 &          &        &          &        & \alpha_n
      \end{bmatrix}

with

(6)   q_1 = \frac{a_1}{\alpha_1}, \qquad q_i = \frac{a_i - \beta_i q_{i-1}}{\alpha_i} \quad (i = 2, \ldots, n).

Each β_i is chosen to make q_i orthogonal to q_{i−1}, and each α_i is such that ||q_i||_2 = 1; with these conditions, we get from (6)

(7)   \alpha_1 = \|a_1\|_2, \qquad \beta_i = a_i^t q_{i-1} \quad (i = 2, \ldots, n), \qquad \alpha_i = \|a_i - \beta_i q_{i-1}\|_2 \quad (i = 2, \ldots, n).

The first stage of this method is perfectly stable in the sense that the computed A_{n−2} satisfies

(8)   A_{n-2} = (A + E) P,

where P is exactly orthogonal and

(9)   \|E\|_2 \le g(m, n) \, \varepsilon_M \, \|A\|_2

for some modestly growing function g(m, n) and machine epsilon ε_M [24, pp. 94–96]. Furthermore, if A = DX, where D is diagonal and cond(X) ≪ cond(A), then these one-sided orthogonal transformations preserve the small singular values better than two-sided transformations [10].

It may happen that some nonadjacent columns of A_{n−2} are not orthogonal to working precision¹ and, even when all those columns are numerically orthogonal, the process of producing a bidiagonal B from A_{n−2} may bring trouble.

¹See [26] for an example: the Läuchli matrix L(n, μ) with n = 7 and μ = ε_M.
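For concreteness, the two stages just described can be pictured in the short NumPy sketch below. This is our own illustrative rendering of (2)–(7): it is unblocked, does not accumulate the right transformations, and has no safeguards against zero columns, so it should not be mistaken for the production codes discussed in section 3.

import numpy as np

def house(x):
    """Householder vector v with (I - 2*v*v^T/(v^T v)) x = (∓||x||, 0, ..., 0)^T."""
    v = x.astype(float).copy()
    sigma = np.linalg.norm(x)
    if sigma == 0.0:
        return None                       # nothing to annihilate
    v[0] += np.sign(x[0]) * sigma if x[0] != 0 else sigma
    return v

def ralha_bidiag(A):
    """Two-stage one-sided (Ralha-type) bidiagonalization sketch.
    Stage 1: right-side Householder transformations making nonadjacent
             columns orthogonal (implicit tridiagonalization of A^T A).
    Stage 2: Gram-Schmidt-like recovery of Q and upper bidiagonal B, as in (6)-(7)."""
    A = A.astype(float).copy()
    m, n = A.shape
    # ---- stage 1: A_r = A_{r-1} * diag(I_r, H_r), r = 1, ..., n-2 ----
    for r in range(1, n - 1):
        x = A[:, r:].T @ A[:, r - 1]      # the n-r dot products with column a_r
        v = house(x)
        if v is not None:
            w = A[:, r:] @ v
            A[:, r:] -= np.outer(w, (2.0 / (v @ v)) * v)
    # ---- stage 2: A_{n-2} = Q B with B upper bidiagonal ----
    Q = np.zeros((m, n))
    B = np.zeros((n, n))
    B[0, 0] = np.linalg.norm(A[:, 0])                 # alpha_1
    Q[:, 0] = A[:, 0] / B[0, 0]
    for i in range(1, n):
        beta = A[:, i] @ Q[:, i - 1]                  # beta_i = a_i^t q_{i-1}
        u = A[:, i] - beta * Q[:, i - 1]
        alpha = np.linalg.norm(u)                     # alpha_i
        Q[:, i] = u / alpha
        B[i - 1, i] = beta
        B[i, i] = alpha
    return Q, B

If the product P of the transformations diag(I_r, H_r) were also accumulated (it is not, in this sketch), the computation could be checked through A ≈ Q B P^t.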
To give an insight into the problem of producing B from A_{n−2}, consider the following triangular matrix:

      R = \begin{bmatrix}
            1 & 1       & 0       & 0       & 0        \\
            0 & 10^{-9} & 1       & 10^{-9} & 10^{-7}  \\
            0 & 0       & 10^{-3} & 10^{-6} & -10^{-4} \\
            0 & 0       & 0       & 1       & 10^{-11} \\
            0 & 0       & 0       & 0       & 1
          \end{bmatrix}.

We have

      R^t R = \begin{bmatrix}
            1 & 1            & 0                & 0                & 0              \\
            1 & 1 + 10^{-18} & 10^{-9}          & 10^{-18}         & 10^{-16}       \\
            0 & 10^{-9}      & 1 + 10^{-6}      & 2 \times 10^{-9} & 0              \\
            0 & 10^{-18}     & 2 \times 10^{-9} & 1 + O(10^{-12})  & O(-10^{-10})   \\
            0 & 10^{-16}     & 0                & O(-10^{-10})     & 1 + O(10^{-8})
          \end{bmatrix},

which differs from a tridiagonal matrix by quantities not larger than 10^{−16}. If A = QR, with Q orthogonal, then A^t A = R^t R and the nonadjacent columns of A are orthogonal to working precision; however, from the uniqueness of the QR decomposition, it follows that there is no bidiagonal B satisfying A = QB. In other words, small perturbations outside the tridiagonal band of A^t A cause much larger perturbations (as large as O(10^{−4}) in this case) in B; i.e., the problem is ill-conditioned.

2.2. Barlow bidiagonal reduction. Barlow, Bosner, and Drmač [4] recently proposed a stable algorithm which consists, essentially, in interleaving the two stages of Ralha's method. More precisely,
1. the vector q_r is computed immediately after applying the Householder transformation H_r; and
2. the Householder vector used in the transformation H_{r+1} is computed from x_r = A(:, r+1:n)^t q_r.

Those authors also noted that the computation of the dot product β_r = q_{r−1}^t A(:, r) may be avoided (this is also the case with Ralha's algorithm); in [4], an extensive error analysis of the proposed algorithm is carried out which shows that the method is always able to compute an upper bidiagonal matrix B in a backward stable manner. That is, we have, for each k = 1, …, n,

(10)   |\sigma_k(B) - \sigma_k(A)| \le f(m, n) \, \varepsilon_M \, \|A\|_2 + O(\varepsilon_M^2)

for some modestly growing function f(m, n). As was already the case with Ralha's bidiagonalization, there are ill-conditioned matrices for which this algorithm obtains small relative errors for all the singular values [26], [10]. Furthermore, the orthogonality of the columns of U is similar to that of the matrix Q in the QR factorization by the modified Gram–Schmidt method [6], [7]; that is, U may be far from orthogonal (see [4, Example 3.1]). Nevertheless, the leading left singular vectors of A can be recovered with good orthogonality (see [4, Corollary 3.20]).

The algorithm may be stated simply as follows (for a more complete statement see [4]).
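To fix ideas before that statement, the interleaving described in items 1 and 2 above can be pictured with the following NumPy sketch. It is written from the description in this section, not taken from [4]: it keeps the avoidable dot product for clarity, does not accumulate the right transformations, and has no safeguards, so it is an illustration rather than the authors' algorithm.

import numpy as np

def house(x):
    """Householder vector v with (I - 2*v*v^T/(v^T v)) x = (∓||x||, 0, ..., 0)^T."""
    v = x.astype(float).copy()
    sigma = np.linalg.norm(x)
    if sigma == 0.0:
        return None
    v[0] += np.sign(x[0]) * sigma if x[0] != 0 else sigma
    return v

def interleaved_bidiag(A):
    """One-sided bidiagonalization with the two stages interleaved
    (a sketch in the spirit of section 2.2).  Returns Q (m x n) and
    upper bidiagonal B (n x n)."""
    A = A.astype(float).copy()
    m, n = A.shape
    Q = np.zeros((m, n))
    B = np.zeros((n, n))
    alpha = np.linalg.norm(A[:, 0])
    B[0, 0] = alpha
    Q[:, 0] = A[:, 0] / alpha
    for r in range(n - 1):
        if r < n - 2:
            # dot products of the trailing columns with q_r drive the next Householder
            x = A[:, r + 1:].T @ Q[:, r]
            v = house(x)
            if v is not None:
                w = A[:, r + 1:] @ v
                A[:, r + 1:] -= np.outer(w, (2.0 / (v @ v)) * v)
        # Gram-Schmidt-type step: beta_{r+1}, alpha_{r+1}, q_{r+1}
        beta = Q[:, r] @ A[:, r + 1]      # avoidable (see text): equals ∓||x|| in exact arithmetic
        u = A[:, r + 1] - beta * Q[:, r]
        alpha = np.linalg.norm(u)
        Q[:, r + 1] = u / alpha
        B[r, r + 1] = beta
        B[r + 1, r + 1] = alpha
    return Q, B

Compared with the earlier two-stage sketch, the only structural change is that the Gram–Schmidt-type step producing q_{r+1} is performed immediately after the Householder transformation it depends on, instead of in a separate second stage.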