Condition Numbers for Lanczos Bidiagonalization with Complete Reorthogonalization


Linear Algebra and its Applications 371 (2003) 315–331. www.elsevier.com/locate/laa. doi:10.1016/S0024-3795(03)00482-8

A. Malyshev (a,*), M. Sadkane (b)

a Department of Informatics, University of Bergen, N-5020 Bergen, Norway
b Département de Mathématiques, Université de Bretagne Occidentale, 6, Av. Le Gorgeu, BP 809, 29285 Brest Cedex, France

Received 4 March 2002; accepted 4 March 2003. Submitted by L.N. Trefethen.
* Corresponding author. E-mail addresses: [email protected] (A. Malyshev), [email protected] (M. Sadkane).

Abstract

We derive exact and computable formulas for the condition numbers characterizing the forward instability in Lanczos bidiagonalization with complete reorthogonalization. One series of condition numbers is responsible for the stability of Krylov spaces, the second for the stability of orthonormal bases in the Krylov spaces, and the third for the stability of the bidiagonal form. The behavior of these condition numbers is illustrated numerically on several examples. © 2003 Elsevier Inc. All rights reserved.

Keywords: Lanczos bidiagonalization; Krylov subspaces; Condition numbers; Stability

1. Introduction

Bidiagonalization of matrices by orthogonal transformations is an important part of many algorithms of numerical linear algebra. For example, it is the first step in the widely used Singular Value Decomposition algorithm [5]. Partial or truncated bidiagonalization by means of the Lanczos process is useful for approximating extreme, and especially the largest, singular values and the associated singular vectors of large sparse matrices, and for solving least squares problems [1].

The standard Lanczos bidiagonalization of a matrix A ∈ R^{m×n}, m ≥ n, is briefly described as follows (more details are found in [1,5]):

Algorithm 1 (Lanczos Bidiagonalization of A).
• Choose v₁ ∈ R^n with ‖v₁‖₂ = 1
• Set s = Av₁; α₁ = ‖s‖₂; u₁ = s/α₁
• for j = 2, 3, ... do
    t = Aᵀu_{j−1} − α_{j−1}v_{j−1};  β_{j−1} = ‖t‖₂;  v_j = t/β_{j−1}
    s = Av_j − β_{j−1}u_{j−1};  α_j = ‖s‖₂;  u_j = s/α_j
  end for

The success of this algorithm is partly due to its simplicity. For its implementation, one only needs two subroutines that efficiently compute the matrix–vector products with A and with its transpose Aᵀ. If the iteration is continued until j = n in exact arithmetic, then Algorithm 1 generates two matrices with orthonormal columns, V = [v₁, v₂, ..., v_n] ∈ R^{n×n} and U₁ = [u₁, u₂, ..., u_n] ∈ R^{m×n}, as well as an upper bidiagonal matrix

$$B_1 = \begin{pmatrix} \alpha_1 & \beta_1 & & & \\ & \alpha_2 & \beta_2 & & \\ & & \ddots & \ddots & \\ & & & \alpha_{n-1} & \beta_{n-1} \\ & & & & \alpha_n \end{pmatrix} \qquad (1)$$

such that

$$AV = U_1 B_1, \qquad A^T U_1 = V B_1^T. \qquad (2)$$

Accurate information on the largest singular values and vectors of A is often available after a few iterations j ≪ n of Algorithm 1. That is, only the leading j × j part of B₁ and the first j columns of V and U₁ suffice to approximate the desired singular values and vectors of A (see [1, Section 7.6.4]). In exact arithmetic, the vectors v₁, ..., v_j and u₁, ..., u_j form orthonormal bases of the Krylov spaces

$$\mathcal{K}_j(A^T A, v_1) = \mathrm{Span}\{v_1, A^T A v_1, \ldots, (A^T A)^{j-1} v_1\} \qquad (3)$$

and

$$\mathcal{L}_j(A A^T, u_1) = \mathrm{Span}\{u_1, A A^T u_1, \ldots, (A A^T)^{j-1} u_1\}, \qquad (4)$$

respectively. These bases will be denoted by B_j = {v₁, v₂, ..., v_j} and C_j = {u₁, u₂, ..., u_j}.
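For readers who want to experiment, a minimal NumPy sketch of Algorithm 1 follows (dense A for simplicity; the function name and interface are ours, and breakdown, i.e. β_{j−1} = 0 or α_j = 0, is not handled):

```python
import numpy as np

def lanczos_bidiag(A, v1, k):
    """k steps of Golub-Kahan-Lanczos bidiagonalization (Algorithm 1).

    Returns U (m x k), V (n x k) and the diagonals (alpha, beta) of the
    leading k x k part of the upper bidiagonal matrix B_1.
    No reorthogonalization: in floating point the columns of U and V
    lose orthogonality as k grows.
    """
    m, n = A.shape
    U = np.zeros((m, k)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k - 1)
    V[:, 0] = v1 / np.linalg.norm(v1)
    s = A @ V[:, 0]
    alpha[0] = np.linalg.norm(s)
    U[:, 0] = s / alpha[0]
    for j in range(1, k):
        t = A.T @ U[:, j - 1] - alpha[j - 1] * V[:, j - 1]
        beta[j - 1] = np.linalg.norm(t)
        V[:, j] = t / beta[j - 1]
        s = A @ V[:, j] - beta[j - 1] * U[:, j - 1]
        alpha[j] = np.linalg.norm(s)
        U[:, j] = s / alpha[j]
    return U, V, alpha, beta
```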
The Krylov spaces provide a convenient tool for the derivation of theoretical estimates of the convergence rate to the extreme singular values. Note that the vectors u₁, ..., u_n can be completed by orthonormal vectors u_{n+1}, ..., u_m so that the matrix U = [u₁, ..., u_m] ∈ R^{m×m} is orthogonal and satisfies

$$AV = UB, \qquad A^T U = V B^T \qquad (5)$$

with

$$B = \begin{pmatrix} B_1 \\ 0 \end{pmatrix} \in \mathbb{R}^{m\times n}. \qquad (6)$$

Unfortunately, the vectors v_j and u_j generated by the Lanczos iteration of Algorithm 1 in finite precision quickly lose their orthogonality as j increases. A more expensive algorithm, based on a complete reorthogonalization Lanczos scheme with the help of Householder transformations, does not suffer from this loss of orthogonality. It is equivalent to Algorithm 1 in exact arithmetic when α_j ≠ 0 and β_j ≠ 0 for all j. Below we give a pseudocode of this algorithm and refer to a similar scheme for the standard symmetric Lanczos in [5, p. 487]. Each matrix of the form H_j^v or H_j^u in this pseudocode stands for a Householder reflector.

Algorithm 2 (Lanczos Bidiagonalization with Reorthogonalization).
• Choose v₁ ∈ R^n with ‖v₁‖₂ = 1 and determine H₁^v so that H₁^v v₁ = e₁
• Set s = Av₁, determine H₁^u so that H₁^u s = α₁e₁, and set u₁ = H₁^u e₁
• for j = 2, 3, ... do
    t = (H_{j−1}^v ··· H₁^v)(Aᵀu_{j−1} − α_{j−1}v_{j−1})
    determine H_j^v so that H_j^v t = (t̂₁, ..., t̂_{j−1}, β_{j−1}, 0, ..., 0)ᵀ
    v_j = H₁^v ··· H_j^v e_j
    s = (H_{j−1}^u ··· H₁^u)(Av_j − β_{j−1}u_{j−1})
    determine H_j^u so that H_j^u s = (ŝ₁, ..., ŝ_{j−1}, α_j, 0, ..., 0)ᵀ
    u_j = H₁^u ··· H_j^u e_j
  end for

Ideally, starting with v₁, all that we can hope to get from Algorithm 2 in finite precision is the exact bidiagonalization of a perturbation A + Δ of A. More precisely, we assume that Algorithm 2 computes orthogonal matrices Ṽ = [ṽ₁, ..., ṽ_n], where ṽ₁ = v₁, and Ũ = [ũ₁, ..., ũ_m], and an upper bidiagonal matrix B̃ such that the relations

$$(A + \Delta)\tilde V = \tilde U \tilde B, \qquad (A + \Delta)^T \tilde U = \tilde V \tilde B^T \qquad (7)$$

are satisfied exactly.

The main concern of this paper is the analysis of the difference between the exact quantities U, V, B and their computed counterparts Ũ, Ṽ, B̃ under variation of an infinitesimal perturbation Δ. More precisely, we are interested in the stability analysis of the Krylov spaces, the corresponding orthonormal bases and the bidiagonal reduction generated by the idealized Algorithm 2. When large perturbations occur in the computed Ũ, Ṽ, B̃, the phenomenon is called forward instability of the Lanczos bidiagonalization. More details on this phenomenon in the context of the QR factorization can be found in [9,10].

Similar topics have been previously studied by Carpraux et al. [2], Kuznetsov [7] and Paige and Van Dooren [8]. The authors of [2] consider the Krylov spaces and orthonormal bases generated by the Arnoldi method and develop corresponding condition numbers using a first order analysis under infinitesimal perturbations of the matrix. The theory of [2] is extended in [7] to the case where the starting vector of the Arnoldi iteration is also subject to perturbation. In [8], the authors use the perturbation techniques due to Chang [3] to thoroughly analyze the sensitivity of the nonsymmetric (and symmetric) Lanczos tridiagonalization. They also derive condition numbers for the corresponding Krylov bases and spaces. As expected, their condition numbers coincide with those from [2] in the symmetric case.
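A simple way to prototype the complete-reorthogonalization behavior of Algorithm 2 is repeated Gram-Schmidt against all previous vectors; in exact arithmetic this is equivalent to Algorithm 2 when all α_j and β_j are nonzero, though it is not the Householder implementation itself. A sketch:

```python
import numpy as np

def lanczos_bidiag_reorth(A, v1, k):
    """Lanczos bidiagonalization with complete reorthogonalization.

    Sketch using classical Gram-Schmidt applied twice against all
    previous vectors; equivalent in exact arithmetic to the
    Householder-based Algorithm 2 when all alpha_j, beta_j != 0.
    """
    m, n = A.shape
    U = np.zeros((m, k)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k - 1)
    V[:, 0] = v1 / np.linalg.norm(v1)
    s = A @ V[:, 0]
    alpha[0] = np.linalg.norm(s); U[:, 0] = s / alpha[0]
    for j in range(1, k):
        t = A.T @ U[:, j - 1] - alpha[j - 1] * V[:, j - 1]
        for _ in range(2):                       # reorthogonalize twice
            t -= V[:, :j] @ (V[:, :j].T @ t)
        beta[j - 1] = np.linalg.norm(t); V[:, j] = t / beta[j - 1]
        s = A @ V[:, j] - beta[j - 1] * U[:, j - 1]
        for _ in range(2):
            s -= U[:, :j] @ (U[:, :j].T @ s)
        alpha[j] = np.linalg.norm(s); U[:, j] = s / alpha[j]
    return U, V, alpha, beta
```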
Our approach is similar to the one in [2], and the reader is assumed to be familiar with the arguments developed in this reference. In Sections 2 and 3 we derive formulas for the condition numbers of the Krylov spaces and orthonormal bases generated by Algorithm 2, as well as the condition numbers associated with the bidiagonal reduction. Section 4 is devoted to numerical experiments.

Throughout this paper, the symbol ‖·‖₂ denotes the Euclidean norm or its induced matrix norm. The symbol ‖·‖_F denotes the Frobenius norm. The space spanned by the columns of a matrix B is denoted by Span{B}. The identity matrix of order j is denoted by I_j, or just I when its order is clear from the context. Its kth column is denoted by e_k. The notation I_j^i = (0_{j×(i−j)}  I_j) with j ≤ i denotes the j × i matrix whose first i − j columns are zero. Note that I_j^j = I_j. We also use the MATLAB-style notation: if E = (e_{k,l})_{1≤k≤n; 1≤l≤m} ∈ R^{n×m}, then E(i₁:j₁, i₂:j₂) ≡ (e_{k,l})_{i₁≤k≤j₁; i₂≤l≤j₂}.

2. Derivation of formulas

Let us look for Ṽ and Ũ in the form Ṽ = (I + X)V and Ũ = (I + Y)U, where X ∈ R^{n×n} and Y ∈ R^{m×m}, and assume that the starting vector v₁ is not perturbed. The latter immediately implies that Xv₁ = 0. Since we work with infinitesimal perturbations only, the matrices X and Y are skew-symmetric. Owing to the orthogonality of Ṽ and Ũ, B̃ = Ũᵀ(A + Δ)Ṽ. Discarding all quadratic terms in the identity B̃ = Uᵀ(I + Yᵀ)(A + Δ)(I + X)V and using (5), we deduce

$$\tilde B - B + U^T \Delta V + (U^T Y^T U)B + B(V^T X V) = 0.$$

The matrices X̄ = VᵀXV and Ȳ = UᵀYU are also skew-symmetric and satisfy the following Sylvester matrix equation with Φ = UᵀΔV:

$$\bar Y B - B \bar X = \Phi + \tilde B - B. \qquad (8)$$

The Krylov space K_j := K_j(AᵀA, v₁) = Span{v₁, ..., v_j} is the linear span of the columns of $V\binom{I_j}{0}$. Its perturbation K̃_j := K_j((A+Δ)ᵀ(A+Δ), v₁) is that of $(I + X)V\binom{I_j}{0}$. The definition of the matrices X and Y related by the Sylvester equation (8) clearly suggests that it is more convenient to work with $\binom{I_j}{0}$ and $V^T(I + X)V\binom{I_j}{0} = \binom{I_j}{0} + \bar X\binom{I_j}{0}$ instead of $V\binom{I_j}{0}$ and $(I + X)V\binom{I_j}{0}$.
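Before the exact formulas, it may help to see the quantity being conditioned. The sketch below is a crude finite-difference probe, reusing lanczos_bidiag_reorth from above: it compares the Krylov space computed from A with the one computed from A + εΔ for random Δ. It is a numerical stand-in for, not an implementation of, the paper's condition numbers:

```python
import numpy as np

def krylov_space_sensitivity(A, v1, j, eps=1e-8, trials=10, seed=0):
    """Estimate the sensitivity of K_j(A^T A, v1) to A -> A + eps*Delta:
    sin of the largest principal angle between exact and perturbed
    spaces, divided by eps*||Delta||_F, maximized over random trials."""
    rng = np.random.default_rng(seed)
    _, V, _, _ = lanczos_bidiag_reorth(A, v1, j)
    Q, _ = np.linalg.qr(V)                    # orthonormal basis of K_j
    worst = 0.0
    for _ in range(trials):
        D = rng.standard_normal(A.shape)
        D /= np.linalg.norm(D, 'fro')
        _, Vp, _, _ = lanczos_bidiag_reorth(A + eps * D, v1, j)
        Qp, _ = np.linalg.qr(Vp)
        # singular values of (I - QQ^T)Qp are sines of principal angles
        sines = np.linalg.svd((np.eye(A.shape[1]) - Q @ Q.T) @ Qp,
                              compute_uv=False)
        worst = max(worst, sines.max() / eps)
    return worst
```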
Recommended publications
  • 18.06 Linear Algebra, Problem Set 2 Solutions
18.06 Problem Set 2 Solution. Total: 100 points.

Section 2.5, Problem 24: Use Gauss-Jordan elimination on [U I] to find the upper triangular U⁻¹:

$$UU^{-1} = I: \quad \begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Solution (4 points): Row reduce [U I] to get [I U⁻¹] as follows (here R_i stands for the ith row):

$$\left(\begin{array}{ccc|ccc} 1 & a & b & 1 & 0 & 0 \\ 0 & 1 & c & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right) \xrightarrow{\substack{R_1 \to R_1 - aR_2 \\ R_2 \to R_2 - cR_3}} \left(\begin{array}{ccc|ccc} 1 & 0 & b-ac & 1 & -a & 0 \\ 0 & 1 & 0 & 0 & 1 & -c \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right) \xrightarrow{R_1 \to R_1 - (b-ac)R_3} \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & 1 & -a & ac-b \\ 0 & 1 & 0 & 0 & 1 & -c \\ 0 & 0 & 1 & 0 & 0 & 1 \end{array}\right).$$

Section 2.5, Problem 40 (Recommended): A is a 4 by 4 matrix with 1's on the diagonal and −a, −b, −c on the diagonal above. Find A⁻¹ for this bidiagonal matrix.

Solution (12 points): Row reduce [A I] to get [I A⁻¹]:

$$\left(\begin{array}{cccc|cccc} 1 & -a & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & -b & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -c & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right) \xrightarrow{\substack{R_1 \to R_1 + aR_2 \\ R_2 \to R_2 + bR_3 \\ R_3 \to R_3 + cR_4}} \left(\begin{array}{cccc|cccc} 1 & 0 & -ab & 0 & 1 & a & 0 & 0 \\ 0 & 1 & 0 & -bc & 0 & 1 & b & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right) \xrightarrow{\substack{R_1 \to R_1 + abR_3 \\ R_2 \to R_2 + bcR_4}} \left(\begin{array}{cccc|cccc} 1 & 0 & 0 & 0 & 1 & a & ab & abc \\ 0 & 1 & 0 & 0 & 0 & 1 & b & bc \\ 0 & 0 & 1 & 0 & 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \end{array}\right).$$

Alternatively, write A = I − N; then N⁴ = 0, so A⁻¹ = I + N + N² + N³.
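The alternative solution lends itself to a quick numerical check; a sketch (the sample values of a, b, c are arbitrary):

```python
import numpy as np

# Inverse of the 4x4 unit upper bidiagonal matrix with -a, -b, -c above
# the diagonal, via the nilpotent series: A = I - N with N^4 = 0, so
# A^{-1} = I + N + N^2 + N^3.
a, b, c = 2.0, 3.0, 5.0
N = np.diag([a, b, c], k=1)          # strictly upper triangular
A = np.eye(4) - N
A_inv = np.eye(4) + N + N @ N + N @ N @ N
assert np.allclose(A @ A_inv, np.eye(4))
print(A_inv)                          # first row: 1, a, ab, abc
```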
• Two Linear Transformations Each Tridiagonal with Respect to an Eigenbasis of the Other; Comments on the Parameter Array [math.RA, 19 Jun 2003]
Two linear transformations each tridiagonal with respect to an eigenbasis of the other; comments on the parameter array. Paul Terwilliger.

Abstract. Let K denote a field. Let d denote a nonnegative integer and consider a sequence p = (θ_i, θ*_i, i = 0...d; ϕ_j, φ_j, j = 1...d) consisting of scalars taken from K. We call p a parameter array whenever: (PA1) θ_i ≠ θ_j, θ*_i ≠ θ*_j if i ≠ j (0 ≤ i, j ≤ d); (PA2) ϕ_i ≠ 0, φ_i ≠ 0 (1 ≤ i ≤ d); (PA3) $\varphi_i = \phi_1 \sum_{h=0}^{i-1} \frac{\theta_h - \theta_{d-h}}{\theta_0 - \theta_d} + (\theta^*_i - \theta^*_0)(\theta_{i-1} - \theta_d)$ (1 ≤ i ≤ d); (PA4) $\phi_i = \varphi_1 \sum_{h=0}^{i-1} \frac{\theta_h - \theta_{d-h}}{\theta_0 - \theta_d} + (\theta^*_i - \theta^*_0)(\theta_{d-i+1} - \theta_0)$ (1 ≤ i ≤ d); (PA5) $(\theta_{i-2} - \theta_{i+1})(\theta_{i-1} - \theta_i)^{-1}$ and $(\theta^*_{i-2} - \theta^*_{i+1})(\theta^*_{i-1} - \theta^*_i)^{-1}$ are equal and independent of i for 2 ≤ i ≤ d − 1. In [13] we showed the parameter arrays are in bijection with the isomorphism classes of Leonard systems. Using this bijection we obtain the following two characterizations of parameter arrays. Assume p satisfies PA1, PA2. Let A, B, A*, B* denote the matrices in Mat_{d+1}(K) which have entries A_{ii} = θ_i, B_{ii} = θ_{d−i}, A*_{ii} = θ*_i, B*_{ii} = θ*_i (0 ≤ i ≤ d), A_{i,i−1} = 1, B_{i,i−1} = 1, A*_{i−1,i} = ϕ_i, B*_{i−1,i} = φ_i (1 ≤ i ≤ d), and all other entries 0. We show the following are equivalent: (i) p satisfies PA3–PA5; (ii) there exists an invertible G ∈ Mat_{d+1}(K) such that G⁻¹AG = B and G⁻¹A*G = B*; (iii) for 0 ≤ i ≤ d the polynomial

$$\sum_{n=0}^{i} \frac{(\lambda-\theta_0)(\lambda-\theta_1)\cdots(\lambda-\theta_{n-1})\,(\theta^*_i-\theta^*_0)(\theta^*_i-\theta^*_1)\cdots(\theta^*_i-\theta^*_{n-1})}{\varphi_1\varphi_2\cdots\varphi_n}$$

is a scalar multiple of the polynomial

$$\sum_{n=0}^{i} \frac{(\lambda-\theta_d)(\lambda-\theta_{d-1})\cdots(\lambda-\theta_{d-n+1})\,(\theta^*_i-\theta^*_0)(\theta^*_i-\theta^*_1)\cdots(\theta^*_i-\theta^*_{n-1})}{\phi_1\phi_2\cdots\phi_n}.$$
• Self-Interlacing Polynomials II: Matrices with Self-Interlacing Spectrum
SELF-INTERLACING POLYNOMIALS II: MATRICES WITH SELF-INTERLACING SPECTRUM. MIKHAIL TYAGLOV.

Abstract. An n × n matrix is said to have a self-interlacing spectrum if its eigenvalues λ_k, k = 1, ..., n, are distributed as follows:

$$\lambda_1 > -\lambda_2 > \lambda_3 > \cdots > (-1)^{n-1}\lambda_n > 0.$$

A method for constructing sign definite matrices with self-interlacing spectra from totally nonnegative ones is presented. We apply this method to bidiagonal and tridiagonal matrices. In particular, we generalize a result by O. Holtz on the spectrum of real symmetric anti-bidiagonal matrices with positive nonzero entries.

1. Introduction. In [5] there were introduced the so-called self-interlacing polynomials. A polynomial p(z) is called self-interlacing if all its roots are real, simple, and interlace the roots of the polynomial p(−z). It is easy to see that if λ_k, k = 1, ..., n, are the roots of a self-interlacing polynomial, then they are distributed as follows:

$$(1.1)\quad \lambda_1 > -\lambda_2 > \lambda_3 > \cdots > (-1)^{n-1}\lambda_n > 0, \quad \text{or}$$
$$(1.2)\quad -\lambda_1 > \lambda_2 > -\lambda_3 > \cdots > (-1)^{n}\lambda_n > 0.$$

The polynomials whose roots are distributed as in (1.1) (resp. in (1.2)) are called self-interlacing of kind I (resp. of kind II). It is clear that a polynomial p(z) is self-interlacing of kind I if, and only if, the polynomial p(−z) is self-interlacing of kind II. Thus, it is enough to study self-interlacing polynomials of kind I, since all the results for self-interlacing polynomials of kind II are obtained automatically.

Definition 1.1. An n × n matrix is said to possess a self-interlacing spectrum if its eigenvalues λ_k, k = 1, ..., n, are real, simple, and distributed as in (1.1).
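A quick numerical experiment consistent with the Holtz-type result quoted in the abstract: build a real symmetric anti-bidiagonal matrix with positive entries (here by column-reversing a palindromic upper bidiagonal matrix) and test the sign pattern of its spectrum. The helper and the construction are ours, not the paper's:

```python
import numpy as np

def is_self_interlacing(lam):
    """True if the real eigenvalues lam, reordered by decreasing
    absolute value, have strictly decreasing moduli and strictly
    alternating signs, i.e. the pattern (1.1) or (1.2)."""
    lam = lam[np.argsort(-np.abs(lam))]
    moduli_ok = np.all(np.diff(np.abs(lam)) < 0) and np.abs(lam[-1]) > 0
    signs_ok = np.all(np.sign(lam[:-1]) == -np.sign(lam[1:]))
    return bool(moduli_ok and signs_ok)

rng = np.random.default_rng(1)
n = 7
d = rng.uniform(1, 2, n);     d = (d + d[::-1]) / 2    # palindromic diag
e = rng.uniform(1, 2, n - 1); e = (e + e[::-1]) / 2    # palindromic superdiag
M = np.fliplr(np.diag(d) + np.diag(e, 1))   # symmetric anti-bidiagonal
assert np.allclose(M, M.T)
print(is_self_interlacing(np.linalg.eigvalsh(M)))      # expect True
```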
  • Accurate Singular Values of Bidiagonal Matrices
Accurate Singular Values of Bidiagonal Matrices (appeared in SIAM J. Sci. Stat. Comput., v. 11, n. 5, pp. 873–912, 1990). James Demmel (Courant Institute, 251 Mercer Str., New York, NY 10012), W. Kahan (Computer Science Division, University of California, Berkeley, CA 94720).

Abstract. Computing the singular values of a bidiagonal matrix is the final phase of the standard algorithm for the singular value decomposition of a general matrix. We present a new algorithm which computes all the singular values of a bidiagonal matrix to high relative accuracy independent of their magnitudes. In contrast, the standard algorithm for bidiagonal matrices may compute small singular values with no relative accuracy at all. Numerical experiments show that the new algorithm is comparable in speed to the standard algorithm, and frequently faster.

Keywords: singular value decomposition, bidiagonal matrix, QR iteration. AMS(MOS) subject classifications: 65F20, 65G05, 65F35.

1. Introduction. The standard algorithm for computing the singular value decomposition (SVD) of a general real matrix A has two phases [7]:
1) Compute orthogonal matrices P₁ and Q₁ such that B = P₁ᵀAQ₁ is in bidiagonal form, i.e. has nonzero entries only on its diagonal and first superdiagonal.
2) Compute orthogonal matrices P₂ and Q₂ such that Σ = P₂ᵀBQ₂ is diagonal and nonnegative. The diagonal entries σ_i of Σ are the singular values of A. We will take them to be sorted in decreasing order: σ_i ≥ σ_{i+1}. The columns of Q = Q₁Q₂ are the right singular vectors, and the columns of P = P₁P₂ are the left singular vectors.
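A small experiment (our construction, not the paper's algorithm) showing why entrywise relative accuracy is the right target for bidiagonal matrices: relative perturbations of the entries move every singular value by a comparably small relative amount, whereas a normwise perturbation of the same size wipes out the small singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# graded bidiagonal matrix with singular values spanning ~8 decades
B = np.diag(np.logspace(0, -8, n)) + np.diag(np.logspace(-1, -7, n - 1), 1)
sv = np.linalg.svd(B, compute_uv=False)

eps = 1e-6
B_rel = B * (1 + eps * rng.standard_normal((n, n)))    # entrywise relative
B_abs = B + eps * np.linalg.norm(B) * rng.standard_normal((n, n))
sv_rel = np.linalg.svd(B_rel, compute_uv=False)
sv_abs = np.linalg.svd(B_abs, compute_uv=False)

print(np.abs(sv_rel - sv) / sv)   # all entries O(eps), even tiny sigmas
print(np.abs(sv_abs - sv) / sv)   # tiny sigmas lose all relative accuracy
```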
• Design and Evaluation of Tridiagonal Solvers for Vector and Parallel Computers (Universitat Politècnica de Catalunya)
Design and Evaluation of Tridiagonal Solvers for Vector and Parallel Computers. Author: Josep Lluis Larriba Pey. Advisor: Juan José Navarro Guerrero. Barcelona, January 1995. Universitat Politècnica de Catalunya, Departament d'Arquitectura de Computadors. Doctoral thesis presented by Josep Lluis Larriba Pey to obtain the degree of Doctor in Informatics from the Universitat Politècnica de Catalunya.

To my wife Marta. To my parents Elvira and José Luis. "A journey of a thousand miles must begin with a single step" (Lao-Tse).

Acknowledgements. I would like to thank my parents, José Luis and Elvira, for their unconditional help and love to me. I will always be indebted to you. I also want to thank my wife, Marta, for being the sparkle in my life, for her support and for understanding my (good) moods. I thank Juanjo, my advisor, for his friendship, patience and good advice. Also, I want to thank Àngel Jorba for his unconditional collaboration, work and support in some of the contributions of this work. I thank the members of the "Comissió de Doctorat del DAG" for their comments, and especially Miguel Valero for his suggestions on the topics of chapter 4. I thank Mateo Valero for his friendship and always good advice.
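The sequential baseline against which vector and parallel tridiagonal solvers are typically measured is Gaussian elimination without pivoting, the Thomas algorithm. A sketch, assuming a system that needs no pivoting (e.g. diagonally dominant):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with subdiagonal a (len n-1),
    diagonal b (len n), superdiagonal c (len n-1), right-hand side d.
    O(n) flops, but the recurrences are inherently sequential,
    which is what motivates the parallel variants."""
    n = len(b)
    cp = np.empty(n - 1); dp = np.empty(n)
    cp[0] = c[0] / b[0]; dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick check against a dense solve on a diagonally dominant system
rng = np.random.default_rng(0)
n = 8
a = rng.random(n - 1); c = rng.random(n - 1)
b = 4 * np.ones(n); d = rng.random(n)
T = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
assert np.allclose(thomas_solve(a, b, c, d), np.linalg.solve(T, d))
```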
  • Computing the Moore-Penrose Inverse for Bidiagonal Matrices
UDC 519.65. Yu. Hakopian. DOI: https://doi.org/10.18523/2617-70802201911-23

COMPUTING THE MOORE-PENROSE INVERSE FOR BIDIAGONAL MATRICES

The Moore-Penrose inverse is the most popular type of matrix generalized inverse, with many applications in both matrix theory and numerical linear algebra. It is well known that the Moore-Penrose inverse can be found via the singular value decomposition. In this regard, the most effective algorithm consists of two stages. In the first stage, through the use of Householder reflections, the initial matrix is reduced to upper bidiagonal form (the Golub-Kahan bidiagonalization algorithm). The second stage is known in the scientific literature as the Golub-Reinsch algorithm: an iterative procedure which, with the help of Givens rotations, generates a sequence of bidiagonal matrices converging to diagonal form. This yields an iterative approximation to the singular value decomposition of the bidiagonal matrix. The principal intention of the present paper is to develop a method which can be considered an alternative to the Golub-Reinsch iterative algorithm. Realizing the approach proposed in the study, the following two main results have been achieved. First, we obtain explicit expressions for the entries of the Moore-Penrose inverse of bidiagonal matrices. Second, based on the closed form formulas, we derive a finite recursive numerical algorithm of optimal computational complexity. Thus, we can compute the Moore-Penrose inverse of bidiagonal matrices without using the singular value decomposition.

Keywords: Moore-Penrose inverse, bidiagonal matrix, inversion formula, finite recursive algorithm.
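The four Penrose properties that uniquely define A⁺ are easy to check numerically. The sketch below does so for a sample singular upper bidiagonal matrix, using NumPy's SVD-based pinv, precisely the route the paper's closed-form algorithm avoids:

```python
import numpy as np

# Sample upper bidiagonal matrix, made genuinely singular by a zero
# diagonal entry, so that pinv differs from an ordinary inverse.
A = np.diag([1.0, 2.0, 0.0, 3.0, 4.0]) + np.diag([5.0, 6.0, 7.0, 8.0], 1)
P = np.linalg.pinv(A)
assert np.allclose(A @ P @ A, A)          # A A+ A = A
assert np.allclose(P @ A @ P, P)          # A+ A A+ = A+
assert np.allclose((A @ P).T, A @ P)      # (A A+)^T = A A+
assert np.allclose((P @ A).T, P @ A)      # (A+ A)^T = A+ A
```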
  • Basic Matrices
Basic matrices. Miroslav Fiedler, Institute of Computer Science, Czech Academy of Sciences, 182 07 Prague 8, Czech Republic. Linear Algebra and its Applications 373 (2003) 143–151. Received 30 April 2002; accepted 13 November 2002. Submitted by B. Shader.

Abstract. We define a basic matrix as a square matrix which has both subdiagonal and superdiagonal rank at most one. We investigate, partly using additional restrictions, the relationship of basic matrices to factorizations. Special basic matrices are also mentioned. © 2003 Published by Elsevier Inc.

AMS classification: 15A23; 15A33. Keywords: Structure rank; Tridiagonal matrix; Oscillatory matrix; Factorization; Orthogonal matrix.

1. Introduction and preliminaries. For m × n matrices, structure (cf. [2]) will mean any nonvoid subset of M × N, M = {1, ..., m}, N = {1, ..., n}. Given a matrix A (over a field, or a ring) and a structure, the structure rank of A is the maximum rank of any submatrix of A all entries of which are contained in the structure. The most important examples of structure ranks for square n × n matrices are the subdiagonal rank (S = {(i,k); n ≥ i > k ≥ 1}), the superdiagonal rank (S = {(i,k); 1 ≤ i < k ≤ n}), the off-diagonal rank (S = {(i,k); i ≠ k, 1 ≤ i,k ≤ n}), the block-subdiagonal rank, etc. All these ranks enjoy the following property:

Theorem 1.1 [2, Theorems 2 and 3]. If a nonsingular matrix has subdiagonal (superdiagonal, off-diagonal, block-subdiagonal, etc.) rank k, then the inverse also has subdiagonal (...) rank k.
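For the subdiagonal structure S = {(i,k): i > k}, every submatrix contained in S lies inside one of the corner blocks A[j:, :j], which gives a direct way to compute the subdiagonal rank as a maximum of n − 1 ordinary ranks. A sketch (the helper is ours, not from the paper):

```python
import numpy as np

def subdiagonal_rank(A):
    """Structure rank for S = {(i,k): i > k}: any submatrix whose
    entries all lie strictly below the diagonal sits inside a corner
    block A[j:, :j], so the structure rank is the max of their ranks."""
    n = A.shape[0]
    return max(np.linalg.matrix_rank(A[j:, :j]) for j in range(1, n))

# A tridiagonal (hence basic) matrix has subdiagonal rank at most 1:
T = np.diag([2.0, 3, 4, 5]) + np.diag([1.0, 1, 1], -1) + np.diag([1.0, 1, 1], 1)
print(subdiagonal_rank(T))   # 1
```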
• Parallel Bidiagonalization of a Dense Matrix
SIAM J. Matrix Anal. Appl., Vol. 29, No. 3, pp. 826–837. © 2007 Society for Industrial and Applied Mathematics.

PARALLEL BIDIAGONALIZATION OF A DENSE MATRIX. CARLOS CAMPOS, DAVID GUERRERO, VICENTE HERNÁNDEZ, AND RUI RALHA.

Abstract. A new stable method for the reduction of rectangular dense matrices to bidiagonal form has been proposed recently. This is a one-sided method since it can be entirely expressed in terms of operations with (full) columns of the matrix under transformation. The algorithm is well suited to parallel computing and, in order to make it even more attractive for distributed memory systems, we introduce a modification which halves the number of communication instances. In this paper we present such a modification. A block organization of the algorithm to use level 3 BLAS routines seems difficult and, at least for the moment, it relies upon level 2 BLAS routines. Nevertheless, we found that our sequential code is competitive with the LAPACK DGEBRD routine. We also compare the time taken by our parallel codes and the ScaLAPACK PDGEBRD routine. We investigated the best data distribution schemes for the different codes and we can state that our parallel codes are also competitive with the ScaLAPACK routine.

Key words. bidiagonal reduction, parallel algorithms. AMS subject classifications. 15A18, 65F30, 68W10. DOI. 10.1137/05062809X.

1. Introduction. The problem of computing the singular value decomposition (SVD) of a matrix is one of the most important operations in numerical linear algebra and is employed in a variety of applications. The SVD is defined as follows. For any rectangular matrix A ∈ R^{m×n} (we will assume that m ≥ n), there exist two orthogonal matrices U ∈ R^{m×m} and V ∈ R^{n×n} and a matrix $\Sigma = \binom{\Sigma_A}{0} \in \mathbb{R}^{m\times n}$, where Σ_A = diag(σ₁, ..., σ_n) is a diagonal matrix, such that A = UΣVᵗ.
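For contrast with the one-sided method, here is a compact dense sketch of the classical two-sided Householder reduction to upper bidiagonal form (the scheme underlying LAPACK's DGEBRD); accumulation of the orthogonal factors is omitted for brevity:

```python
import numpy as np

def householder_bidiag(A):
    """Reduce A (m >= n) to upper bidiagonal form B = U^T A V by
    alternating left and right Householder reflectors. Returns B only;
    the reflectors are applied but not accumulated."""
    B = A.astype(float).copy()
    m, n = B.shape
    for k in range(n):
        # left reflector: zero out B[k+1:, k]
        x = B[k:, k]
        v = x.copy(); v[0] += np.copysign(np.linalg.norm(x), x[0])
        if np.linalg.norm(v) > 0:
            v /= np.linalg.norm(v)
            B[k:, k:] -= 2.0 * np.outer(v, v @ B[k:, k:])
        if k < n - 2:
            # right reflector: zero out B[k, k+2:]
            x = B[k, k + 1:]
            v = x.copy(); v[0] += np.copysign(np.linalg.norm(x), x[0])
            if np.linalg.norm(v) > 0:
                v /= np.linalg.norm(v)
                B[k:, k + 1:] -= 2.0 * np.outer(B[k:, k + 1:] @ v, v)
    return B

A = np.random.default_rng(0).standard_normal((8, 5))
B = householder_bidiag(A)
print(np.allclose(np.triu(np.tril(B, 1)), B, atol=1e-12))   # bidiagonal
print(np.allclose(np.linalg.svd(B, compute_uv=False),
                  np.linalg.svd(A, compute_uv=False)))       # same sigmas
```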
  • Moore–Penrose Inverse of Bidiagonal Matrices. I
PROCEEDINGS OF THE YEREVAN STATE UNIVERSITY. Physical and Mathematical Sciences, 2015, № 2, p. 11–20. Mathematics.

MOORE-PENROSE INVERSE OF BIDIAGONAL MATRICES. I. Yu. R. HAKOPIAN (Chair of Numerical Analysis and Mathematical Modeling, YSU, Armenia), S. S. ALEKSANYAN (Chair of Mathematics and Mathematical Modeling, RAU, Armenia).

In the present paper we deduce closed form expressions for the entries of the Moore-Penrose inverse of a special type of upper bidiagonal matrices. On the basis of the formulae obtained, a finite algorithm with optimal order of computational complexity is constructed. MSC2010: 15A09; 65F20. Keywords: generalized inverse, Moore-Penrose inverse, bidiagonal matrix.

Introduction. For a real m × n matrix A, the Moore-Penrose inverse A⁺ is the unique n × m matrix that satisfies the following four properties:

$$AA^+A = A, \quad A^+AA^+ = A^+, \quad (A^+A)^T = A^+A, \quad (AA^+)^T = AA^+$$

(see [1], for example). If A is a square nonsingular matrix, then A⁺ = A⁻¹. Thus, Moore-Penrose inversion generalizes ordinary matrix inversion. The idea of a matrix generalized inverse was first introduced in 1920 by E. Moore [2], and was later rediscovered by R. Penrose [3]. The Moore-Penrose inverses have numerous applications in least-squares problems, regression analysis, linear programming, etc. For more information on generalized inverses see [1] and its extensive bibliography. In this work we consider the problem of Moore-Penrose inversion of square upper bidiagonal matrices

$$A = \begin{pmatrix} d_1 & b_1 & & & \\ & d_2 & b_2 & & \\ & & \ddots & \ddots & \\ & & & d_{n-1} & b_{n-1} \\ & & & & d_n \end{pmatrix}. \qquad (1)$$

A natural question arises: what caused an interest in pseudoinversion of such matrices? The reason is as follows.
• Minimum-Residual Methods for Sparse Least-Squares Using Golub-Kahan Bidiagonalization
MINIMUM-RESIDUAL METHODS FOR SPARSE LEAST-SQUARES USING GOLUB-KAHAN BIDIAGONALIZATION

A dissertation submitted to the Institute for Computational and Mathematical Engineering and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy. David Chin-lung Fong, December 2011. © 2011 by Chin Lung Fong. All rights reserved. This dissertation is online at: http://purl.stanford.edu/sd504kj0427. Reading committee: Michael Saunders (primary adviser), Margot Gerritsen, Walter Murray.

ABSTRACT

For 30 years, LSQR and its theoretical equivalent CGLS have been the standard iterative solvers for large rectangular systems Ax = b and least-squares problems min ‖Ax − b‖. They are analytically equivalent to symmetric CG on the normal equation AᵀAx = Aᵀb, and they reduce ‖r_k‖ monotonically, where r_k = b − Ax_k is the kth residual vector.
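A minimal usage sketch of LSQR as available in SciPy (a standard implementation of the Paige-Saunders method built on Golub-Kahan bidiagonalization; the problem data here is random):

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sprandom(1000, 200, density=0.01, random_state=0, format='csr')
b = rng.standard_normal(1000)

# Solve min ||Ax - b|| iteratively; lsqr returns the solution plus
# diagnostics (stop reason, iteration count, residual norms, ...).
result = lsqr(A, b, atol=1e-10, btol=1e-10)
x, istop, itn, r1norm = result[0], result[1], result[2], result[3]
print(istop, itn, r1norm)   # stopping reason, iterations, ||b - Ax||
```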
• Restarted Lanczos Bidiagonalization for the SVD in SLEPc
Scalable Library for Eigenvalue Problem Computations. SLEPc Technical Report STR-8, available at http://slepc.upv.es

Restarted Lanczos Bidiagonalization for the SVD in SLEPc. V. Hernández, J. E. Román, A. Tomás. Last update: June, 2007 (slepc 2.3.3). Previous updates: –.

About SLEPc Technical Reports: These reports are part of the documentation of slepc, the Scalable Library for Eigenvalue Problem Computations. They are intended to complement the Users Guide by providing technical details that normal users typically do not need to know but may be of interest for more advanced users.

Contents: 1. Introduction; 2. Description of the Method (2.1 Lanczos Bidiagonalization; 2.2 Dealing with Loss of Orthogonality; 2.3 Restarted Bidiagonalization; 2.4 Available Implementations); 3. The slepc Implementation (3.1 The Algorithm; 3.2 User Options; 3.3 Known Issues and Applicability).

1. Introduction. The singular value decomposition (SVD) of an m × n complex matrix A can be written as

$$A = U \Sigma V^*, \qquad (1)$$

where U = [u₁, ..., u_m] is an m × m unitary matrix (U*U = I), V = [v₁, ..., v_n] is an n × n unitary matrix (V*V = I), and Σ is an m × n diagonal matrix with nonnegative real diagonal entries Σ_{ii} = σ_i for i = 1, ..., min{m,n}. If A is real, U and V are real and orthogonal. The vectors u_i are called the left singular vectors, the v_i are the right singular vectors, and the σ_i are the singular values. In this report, we will assume without loss of generality that m ≥ n.
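SLEPc itself is used from C/Fortran (or from Python via slepc4py). As a self-contained stand-in, the sketch below computes a few dominant singular triplets of a sparse matrix with SciPy's svds, the same task the report's restarted Lanczos bidiagonalization addresses:

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import svds

A = sprandom(2000, 500, density=0.005, random_state=1, format='csr')
U, s, Vt = svds(A, k=5)            # 5 largest singular triplets
print(np.sort(s)[::-1])

# residual check: ||A v_i - sigma_i u_i|| should be near zero
for i in range(5):
    r = A @ Vt[i] - s[i] * U[:, i]
    print(np.linalg.norm(r))
```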
• Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods
Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods

Richard Barrett (Oak Ridge National Laboratory), Michael Berry (University of Tennessee, Knoxville), Tony F. Chan (University of California, Los Angeles), James Demmel (University of California, Berkeley), June M. Donato (Science Applications International Corporation), Jack Dongarra (University of Tennessee / Oak Ridge National Laboratory), Victor Eijkhout (Texas Advanced Computing Center, The University of Texas at Austin), Roldan Pozo (National Institute of Standards and Technology), Charles Romine (Office of Science and Technology Policy), and Henk Van der Vorst (Utrecht University).

This document is the electronic version of the 2nd edition of the Templates book, which is available for purchase from the Society for Industrial and Applied Mathematics (http://www.siam.org/books). This work was supported in part by DARPA and ARO under contract number DAAL03-91-C-0047, the National Science Foundation Science and Technology Center Cooperative Agreement No. CCR-8809615, the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under Contract DE-AC05-84OR21400, and the Stichting Nationale Computer Faciliteit (NCF) by Grant CRG 92.03.

How to Use This Book. We have divided this book into five main chapters. Chapter 1 gives the motivation for this book and the use of templates. Chapter 2 describes stationary and nonstationary iterative methods.