Chapter 7 The Singular Value Decomposition (SVD)

1. The SVD produces orthonormal bases of v's and u's for the four fundamental subspaces.
2. Using those bases, A becomes a diagonal matrix Σ and Av_i = σ_i u_i, where σ_i is a singular value.
3. The two-bases diagonalization A = UΣV^T often has more information than A = XΛX^{-1}.
4. UΣV^T separates A into rank-1 matrices σ_1 u_1 v_1^T + ··· + σ_r u_r v_r^T. The piece σ_1 u_1 v_1^T is the largest!

7.1 Bases and Matrices in the SVD

The Singular Value Decomposition is a highlight of linear algebra. A is any m by n matrix, square or rectangular. Its rank is r. We will diagonalize this A, but not by X^{-1}AX. The eigenvectors in X have three big problems: they are usually not orthogonal, there are not always enough eigenvectors, and Ax = λx requires A to be a square matrix. The singular vectors of A solve all those problems in a perfect way.

Let me describe what we want from the SVD: the right bases for the four subspaces. Then I will write about the steps to find those bases in order of importance.

The price we pay is to have two sets of singular vectors, u's and v's. The u's are in R^m and the v's are in R^n. They will be the columns of an m by m matrix U and an n by n matrix V. I will first describe the SVD in terms of those basis vectors. Then I can also describe the SVD in terms of the orthogonal matrices U and V.

(using vectors) The u's and v's give bases for the four fundamental subspaces:

u_1, ..., u_r is an orthonormal basis for the column space
u_{r+1}, ..., u_m is an orthonormal basis for the left nullspace N(A^T)
v_1, ..., v_r is an orthonormal basis for the row space
v_{r+1}, ..., v_n is an orthonormal basis for the nullspace N(A).

More than just orthogonality, these basis vectors diagonalize the matrix A:

"A is diagonalized"    Av_1 = σ_1 u_1,  Av_2 = σ_2 u_2,  ...,  Av_r = σ_r u_r    (1)

Those singular values σ_1 to σ_r will be positive numbers: σ_i is the length of Av_i. The σ's go into a diagonal matrix that is otherwise zero. That matrix is Σ.

(using matrices) Since the u's are orthonormal, the matrix U_r with those r columns has U_r^T U_r = I. Since the v's are orthonormal, the matrix V_r has V_r^T V_r = I. Then the equations Av_i = σ_i u_i tell us, column by column, that AV_r = U_r Σ_r:

$$
A V_r = U_r \Sigma_r \qquad
A \begin{bmatrix} v_1 & \cdots & v_r \end{bmatrix}
= \begin{bmatrix} u_1 & \cdots & u_r \end{bmatrix}
\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_r \end{bmatrix}
\qquad (2)
$$

(m by n)(n by r) = (m by r)(r by r)

This is the heart of the SVD, but there is more. Those v's and u's account for the row space and column space of A. We have n − r more v's and m − r more u's, from the nullspace N(A) and the left nullspace N(A^T). They are automatically orthogonal to the first v's and u's (because the whole nullspaces are orthogonal). We now include all the v's and u's in V and U, so these matrices become square. We still have AV = UΣ:

$$
AV = U\Sigma \qquad
A \begin{bmatrix} v_1 & \cdots & v_r & \cdots & v_n \end{bmatrix}
= \begin{bmatrix} u_1 & \cdots & u_r & \cdots & u_m \end{bmatrix}
\begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & \end{bmatrix}
\qquad (3)
$$

(m by n)(n by n) = (m by m)(m by n)

The new Σ is m by n. It is just the r by r matrix in equation (2) with m − r extra zero rows and n − r new zero columns. The real change is in the shapes of U and V. Those are square orthogonal matrices. So AV = UΣ can become A = UΣV^T. This is the Singular Value Decomposition. I can multiply columns u_i σ_i from UΣ by rows of V^T:

$$
\textbf{SVD} \qquad A = U\Sigma V^T = u_1\sigma_1 v_1^T + \cdots + u_r\sigma_r v_r^T. \qquad (4)
$$

Equation (2) was a "reduced SVD" with bases for the row space and column space. Equation (3) is the full SVD with nullspaces included. They both split up A into the same r matrices u_i σ_i v_i^T of rank one: column times row.

We will see that each σ_i^2 is an eigenvalue of A^T A and also of AA^T.
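To see equations (2)-(4) in action numerically, here is a minimal sketch in Python/NumPy (an illustration added here, not from the text; the 3 by 2 matrix A and the names Ur, sr, Vr_t are arbitrary choices). NumPy's np.linalg.svd returns V transposed, so its rows are the v's.

```python
import numpy as np

# A small 3 by 2 matrix of rank r = 2, chosen only for illustration.
A = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 0.0]])

# Full SVD (equation (3)): U is m by m, Vt holds the v's as rows, s the singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Reduced SVD (equation (2)): keep the first r columns of U, first r v's, first r sigmas.
r = np.linalg.matrix_rank(A)
Ur, sr, Vr_t = U[:, :r], s[:r], Vt[:r, :]

# Rank-one splitting (equation (4)): A equals the sum of sigma_i * u_i * v_i^T.
A_rebuilt = sum(sr[i] * np.outer(Ur[:, i], Vr_t[i, :]) for i in range(r))
print(np.allclose(A, A_rebuilt))   # True
```

Passing full_matrices=False instead returns the thin factors with min(m, n) columns; slicing to the first r columns, as above, gives the reduced SVD of equation (2).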
When we put the singular values in descending order, σ_1 ≥ σ_2 ≥ ··· ≥ σ_r > 0, the splitting in equation (4) gives the r rank-one pieces of A in order of importance. This is crucial.

Example 1  When is A = UΣV^T (singular values) the same as XΛX^{-1} (eigenvalues)?

Solution  A needs orthonormal eigenvectors to allow X = U = V. A also needs eigenvalues λ ≥ 0 if Λ = Σ. So A must be a positive semidefinite (or definite) symmetric matrix. Only then will A = XΛX^{-1}, which is also QΛQ^T, coincide with A = UΣV^T.

Example 2  If A = xy^T (rank 1) with unit vectors x and y, what is the SVD of A?

Solution  The reduced SVD in (2) is exactly xy^T, with rank r = 1. It has u_1 = x and v_1 = y and σ_1 = 1. For the full SVD, complete u_1 = x to an orthonormal basis of u's, and complete v_1 = y to an orthonormal basis of v's. No new σ's, only σ_1 = 1.

Proof of the SVD

We need to show how those amazing u's and v's can be constructed. The v's will be orthonormal eigenvectors of A^T A. This must be true because we are aiming for

$$
A^T A = (U\Sigma V^T)^T (U\Sigma V^T) = V\Sigma^T U^T U\Sigma V^T = V\Sigma^T \Sigma V^T. \qquad (5)
$$

On the right you see the eigenvector matrix V for the symmetric positive (semi)definite matrix A^T A. And Σ^T Σ must be the eigenvalue matrix of A^T A: each σ_i^2 is an eigenvalue λ(A^T A)!

Now Av_i = σ_i u_i tells us the unit vectors u_1 to u_r. This is the key equation (1). The essential point, the whole reason that the SVD succeeds, is that those unit vectors u_1 to u_r are automatically orthogonal to each other (because the v's are orthogonal):

$$
\text{Key step} \qquad
u_i^T u_j = \left(\frac{Av_i}{\sigma_i}\right)^{\!T}\!\left(\frac{Av_j}{\sigma_j}\right)
= \frac{v_i^T A^T A v_j}{\sigma_i \sigma_j}
= \frac{\sigma_j^2}{\sigma_i \sigma_j}\, v_i^T v_j = \text{zero}. \qquad (6)
$$

The v's are eigenvectors of A^T A (symmetric). They are orthogonal, and now the u's are also orthogonal. Actually those u's will be eigenvectors of AA^T.

Finally we complete the v's and u's to n v's and m u's with any orthonormal bases for the nullspaces N(A) and N(A^T). We have found V and Σ and U in A = UΣV^T.

An Example of the SVD

Here is an example to show the computation of the three matrices in A = UΣV^T.

Example 3  Find the matrices U, Σ, V for

$$
A = \begin{bmatrix} 3 & 0 \\ 4 & 5 \end{bmatrix}.
$$

The rank is r = 2. With rank 2, this A has positive singular values σ_1 and σ_2. We will see that σ_1 is larger than λ_max = 5, and σ_2 is smaller than λ_min = 3. Begin with A^T A and AA^T:

$$
A^T A = \begin{bmatrix} 25 & 20 \\ 20 & 25 \end{bmatrix}
\qquad
AA^T = \begin{bmatrix} 9 & 12 \\ 12 & 41 \end{bmatrix}
$$

Those have the same trace (50) and the same eigenvalues σ_1^2 = 45 and σ_2^2 = 5. The square roots are σ_1 = √45 and σ_2 = √5. Then σ_1 σ_2 = 15, and this is the determinant of A.

A key step is to find the eigenvectors of A^T A (with eigenvalues 45 and 5):

$$
\begin{bmatrix} 25 & 20 \\ 20 & 25 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
= 45 \begin{bmatrix} 1 \\ 1 \end{bmatrix}
\qquad
\begin{bmatrix} 25 & 20 \\ 20 & 25 \end{bmatrix}\begin{bmatrix} -1 \\ 1 \end{bmatrix}
= 5 \begin{bmatrix} -1 \\ 1 \end{bmatrix}
$$

Then v_1 and v_2 are those (orthogonal!) eigenvectors rescaled to length 1. The u_i will be the left singular vectors.

$$
\text{Right singular vectors} \qquad
v_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}
\qquad
v_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} -1 \\ 1 \end{bmatrix}
$$

Now compute Av_1 and Av_2, which will be σ_1 u_1 = √45 u_1 and σ_2 u_2 = √5 u_2:

$$
Av_1 = \frac{1}{\sqrt{2}}\begin{bmatrix} 3 \\ 9 \end{bmatrix}
= \sqrt{45}\,\frac{1}{\sqrt{10}}\begin{bmatrix} 1 \\ 3 \end{bmatrix} = \sigma_1 u_1
\qquad
Av_2 = \frac{1}{\sqrt{2}}\begin{bmatrix} -3 \\ 1 \end{bmatrix}
= \sqrt{5}\,\frac{1}{\sqrt{10}}\begin{bmatrix} -3 \\ 1 \end{bmatrix} = \sigma_2 u_2
$$

The division by √10 makes u_1 and u_2 orthonormal. Then σ_1 = √45 and σ_2 = √5 as expected. The Singular Value Decomposition is A = UΣV^T:

$$
U = \frac{1}{\sqrt{10}}\begin{bmatrix} 1 & -3 \\ 3 & 1 \end{bmatrix}
\qquad
\Sigma = \begin{bmatrix} \sqrt{45} & \\ & \sqrt{5} \end{bmatrix}
\qquad
V = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix}. \qquad (7)
$$

U and V contain orthonormal bases for the column space and the row space (both spaces are just R^2). The real achievement is that those two bases diagonalize A: AV equals UΣ. Then the matrix U^T A V = Σ is diagonal.

The matrix A splits into a combination of two rank-one matrices, columns times rows:

$$
\sigma_1 u_1 v_1^T + \sigma_2 u_2 v_2^T
= \frac{\sqrt{45}}{\sqrt{20}}\begin{bmatrix} 1 & 1 \\ 3 & 3 \end{bmatrix}
+ \frac{\sqrt{5}}{\sqrt{20}}\begin{bmatrix} 3 & -3 \\ -1 & 1 \end{bmatrix}
= \begin{bmatrix} 3 & 0 \\ 4 & 5 \end{bmatrix} = A.
$$
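Example 3 can be verified in a few lines. The sketch below uses NumPy (an assumed tool, not part of the text); the signs of the computed singular vectors may differ from the U and V in equation (7), but the singular values √45 and √5 and the diagonalization U^T A V = Σ come out the same.

```python
import numpy as np

# Example 3: A = [[3, 0], [4, 5]] with sigma_1 = sqrt(45), sigma_2 = sqrt(5).
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

# A^T A and A A^T share the eigenvalues sigma_i^2 = 45 and 5 (same trace 50).
print(np.linalg.eigvalsh(A.T @ A))   # [ 5. 45.]
print(np.linalg.eigvalsh(A @ A.T))   # [ 5. 45.]

U, s, Vt = np.linalg.svd(A)
print(s)                             # [6.7082..., 2.2360...] = [sqrt(45), sqrt(5)]
print(np.isclose(s[0] * s[1], abs(np.linalg.det(A))))   # sigma_1 * sigma_2 = |det A| = 15

# The two bases diagonalize A: U^T A V = Sigma.
print(np.allclose(U.T @ A @ Vt.T, np.diag(s)))           # True
```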
An Extreme Matrix

Here is a larger example, when the u's and the v's are just columns of the identity matrix. So the computations are easy, but keep your eye on the order of the columns. The matrix A is badly lopsided (strictly triangular). All its eigenvalues are zero. AA^T is not close to A^T A. The matrices U and V will be permutations that fix these problems properly.

$$
A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\qquad
\begin{array}{l}
\text{eigenvalues } \lambda = 0, 0, 0, 0 \text{ (all zero!)} \\
\text{only one eigenvector } (1, 0, 0, 0) \\
\text{singular values } \sigma = 3, 2, 1 \\
\text{singular vectors are columns of } I
\end{array}
$$

We always start with A^T A and AA^T. They are diagonal (with easy v's and u's):

$$
A^T A = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 4 & 0 \\ 0 & 0 & 0 & 9 \end{bmatrix}
\qquad
AA^T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 9 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
$$

Their eigenvectors (u's for AA^T and v's for A^T A) go in decreasing order σ_1^2 > σ_2^2 > σ_3^2 of the eigenvalues. These eigenvalues σ^2 = 9, 4, 1 are not zero!

$$
U = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\qquad
\Sigma = \begin{bmatrix} 3 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}
\qquad
V = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}
$$

Those first columns u_1 and v_1 have 1's in positions 3 and 4. Then u_1 σ_1 v_1^T picks out the biggest number A_34 = 3 in the original matrix A. The three rank-one matrices in the SVD come exactly from the numbers 3, 2, 1 in A.
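A quick numerical check of this extreme example (again a NumPy sketch added for illustration, not part of the original text; the library's U and V agree with the permutations above up to possible sign conventions):

```python
import numpy as np

# The extreme 4 by 4 matrix: strictly triangular, so every eigenvalue is zero.
A = np.array([[0, 1, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 3],
              [0, 0, 0, 0]], dtype=float)

print(np.diag(A.T @ A))   # [0. 1. 4. 9.]   diagonal A^T A
print(np.diag(A @ A.T))   # [1. 4. 9. 0.]   diagonal A A^T

U, s, Vt = np.linalg.svd(A)
print(s)                  # [3. 2. 1. 0.]   singular values in decreasing order

# sigma_1 u_1 v_1^T is the rank-one piece that picks out the entry A_34 = 3.
print(np.round(s[0] * np.outer(U[:, 0], Vt[0, :]), 10))
print(np.allclose(A, U @ np.diag(s) @ Vt))   # True
```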