RANK FACTORIZATION AND MOORE-PENROSE INVERSE
Gradimir V. Milovanović and Predrag S. Stanimirović


Abstract. In this paper we develop a few representations of the Moore-Penrose inverse, based on full-rank factorizations of matrices. We divide these representations into two different classes: methods which arise from the known block decompositions, and the determinantal representation. In particular cases we obtain several known results.

1991 Mathematics Subject Classification. 15A09, 15A24.

1. Introduction

The set of $m \times n$ complex matrices of rank $r$ is denoted by $\mathbb{C}^{m\times n}_r = \{X \in \mathbb{C}^{m\times n} : \operatorname{rank}(X) = r\}$. With $A_{|r}$ and $A^{|r}$ we denote the first $r$ columns of $A$ and the first $r$ rows of $A$, respectively. The identity matrix of order $k$ is denoted by $I_k$, and $O$ denotes the zero block of appropriate dimensions.

We use the following useful expression for the Moore-Penrose generalized inverse $A^\dagger$, based on the full-rank factorization $A = PQ$ of $A$ [1-2]:
$$A^\dagger = Q^\dagger P^\dagger = Q^*(QQ^*)^{-1}(P^*P)^{-1}P^* = Q^*(P^*AQ^*)^{-1}P^*.$$

We restate the main known block decompositions [7], [16-18]. For a given matrix $A \in \mathbb{C}^{m\times n}_r$ there exist regular matrices $R$, $G$, permutation matrices $E$, $F$ and unitary matrices $U$, $V$, such that:

(T1) $RAG = \begin{bmatrix} I_r & O \\ O & O \end{bmatrix} = N_1$,  (T2) $RAG = \begin{bmatrix} B & O \\ O & O \end{bmatrix} = N_2$,

(T3) $RAF = \begin{bmatrix} I_r & K \\ O & O \end{bmatrix} = N_3$,  (T4) $EAG = \begin{bmatrix} I_r & O \\ K & O \end{bmatrix} = N_4$,

(T5) $UAG = \begin{bmatrix} I_r & O \\ O & O \end{bmatrix} = N_1$,  (T6) $RAV = \begin{bmatrix} I_r & O \\ O & O \end{bmatrix} = N_1$,

(T7) $UAV = \begin{bmatrix} B & O \\ O & O \end{bmatrix} = N_2$,  (T8) $UAF = \begin{bmatrix} B & K \\ O & O \end{bmatrix} = N_5$,

(T9) $EAV = \begin{bmatrix} B & O \\ K & O \end{bmatrix} = N_6$,

(T10) $EAF = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = N_7$, where $\operatorname{rank}(A_{11}) = \operatorname{rank}(A)$.

(T11) Transformation of similarity for square matrices [11]:
$$RAR^{-1} = RAF\,F^*R^{-1} = \begin{bmatrix} I_r & K \\ O & O \end{bmatrix} F^*R^{-1} = \begin{bmatrix} T_1 & T_2 \\ O & O \end{bmatrix}.$$

The block form (T10) can be expressed in two different ways:

(T10a) $EAF = \begin{bmatrix} A_{11} & A_{11}T \\ SA_{11} & SA_{11}T \end{bmatrix}$, where the multipliers $S$ and $T$ satisfy $T = A_{11}^{-1}A_{12}$, $S = A_{21}A_{11}^{-1}$ (see [8]);

(T10b) $EAF = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{21}A_{11}^{-1}A_{12} \end{bmatrix}$ (see [9]).

Block representations of the Moore-Penrose inverse are investigated in [8], [11], [15-19]. In [16], [18] the results are obtained by solving the equations (1)-(4). In [15], [8] the corresponding representations are obtained using the block decompositions (T10a) and (T10b) and the implied full-rank factorizations. Also, a block representation of the Moore-Penrose inverse based on $A^\dagger = A^*TA^*$, where $T \in A^*AA^*\{1\}$, is introduced in [19]. Block decomposition (T11) is investigated in [11], but only for square matrices and the group inverse.

The notion of a determinantal representation of the Moore-Penrose inverse of $A$ means a representation of the elements of $A^\dagger$ in terms of minors of $A$. Determinantal representations of the Moore-Penrose inverse are examined in [1-2], [4-6], [12-14]. For the sake of completeness, we restate here several notations and the main result.

For an $m \times n$ matrix $A$ let $\alpha = \{\alpha_1, \ldots, \alpha_r\}$ and $\beta = \{\beta_1, \ldots, \beta_r\}$ be subsets of $\{1, \ldots, m\}$ and $\{1, \ldots, n\}$, respectively. Then
$$A\begin{pmatrix} \alpha_1 & \ldots & \alpha_r \\ \beta_1 & \ldots & \beta_r \end{pmatrix} = |A^\alpha_\beta|$$
denotes the minor of $A$ determined by the rows indexed by $\alpha$ and the columns indexed by $\beta$, and $A^\alpha_\beta$ represents the corresponding submatrix. Also, the algebraic complement of $|A^\alpha_\beta|$ corresponding to $a_{ij}$, where $i = \alpha_p$ and $j = \beta_q$, is defined by
$$\frac{\partial}{\partial a_{ij}} |A^\alpha_\beta| = A_{ij}\begin{pmatrix} \alpha_1 & \ldots & \alpha_{p-1} & i & \alpha_{p+1} & \ldots & \alpha_r \\ \beta_1 & \ldots & \beta_{q-1} & j & \beta_{q+1} & \ldots & \beta_r \end{pmatrix} = (-1)^{p+q} A\begin{pmatrix} \alpha_1 & \ldots & \alpha_{p-1} & \alpha_{p+1} & \ldots & \alpha_r \\ \beta_1 & \ldots & \beta_{q-1} & \beta_{q+1} & \ldots & \beta_r \end{pmatrix}.$$
The adjoint matrix of a square matrix $B$ is denoted by $\operatorname{adj}(B)$, and its determinant by $|B|$.
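The expression $A^\dagger = Q^*(P^*AQ^*)^{-1}P^*$ is straightforward to evaluate numerically. The following sketch (an illustrative addition, assuming NumPy; the matrices and sizes are arbitrary test data, not from the paper) builds a rank-$r$ matrix from a full-rank factorization $A = PQ$ and compares the result with a standard pseudoinverse routine.

```python
import numpy as np

# Illustrative sketch: A† = Q* (P* A Q*)^{-1} P* for a full-rank factorization A = P Q,
# where P is m x r of full column rank and Q is r x n of full row rank.
rng = np.random.default_rng(1)
m, n, r = 6, 4, 2
P = rng.standard_normal((m, r))      # generically full column rank
Q = rng.standard_normal((r, n))      # generically full row rank
A = P @ Q                            # a rank-r matrix

core = P.conj().T @ A @ Q.conj().T   # the r x r matrix P* A Q*, invertible when rank(A) = r
A_pinv = Q.conj().T @ np.linalg.inv(core) @ P.conj().T

print(np.allclose(A_pinv, np.linalg.pinv(A)))   # expected: True
```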
A determinantal representation of full-rank matrices is introduced in [1] and in [4-6]. In [12-14] an elegant derivation of the determinantal representation of the Moore-Penrose inverse is introduced, using a full-rank factorization and known results for full-rank matrices. The main result of these papers is:

Proposition 1.1. The $(i,j)$th element of the Moore-Penrose inverse $G = (g_{ij})$ of $A \in \mathbb{C}^{m\times n}_r$ is given by
$$g_{ij} = \frac{\displaystyle\sum_{\alpha:\,j\in\alpha;\;\beta:\,i\in\beta} \overline{|A^\alpha_\beta|}\;\frac{\partial}{\partial a_{ji}}|A^\alpha_\beta|}{\displaystyle\sum_{\gamma,\delta} \overline{|A^\gamma_\delta|}\,|A^\gamma_\delta|}.$$

In this paper we investigate two different representations of the Moore-Penrose inverse. The first class of representations is a continuation of the papers [8] and [15]: from the presented block factorizations of matrices we find the corresponding full-rank factorizations $A = PQ$, and then apply $A^\dagger = Q^*(P^*AQ^*)^{-1}P^*$. In the second representation, $A^\dagger$ is expressed in terms of minors of the matrix $A$, and we describe an elegant proof of the well-known determinantal representation of the Moore-Penrose inverse. The main advantages of the described block representations are their simple derivation and computation and the possibility of natural generalization. The determinantal representation of the Moore-Penrose inverse can be implemented only for matrices of small dimensions ($n \le 10$).

2. Block representation

Theorem 2.1. The Moore-Penrose inverse of a given matrix $A \in \mathbb{C}^{m\times n}_r$ can be represented as follows, where the block representations $(G_i)$ correspond to the block decompositions $(T_i)$, $i \in \{1, \ldots, 9, 10a, 10b, 11\}$:

$(G_1)$ $A^\dagger = \bigl((G^{-1})^{|r}\bigr)^*\Bigl[\bigl((R^{-1})_{|r}\bigr)^* A \bigl((G^{-1})^{|r}\bigr)^*\Bigr]^{-1}\bigl((R^{-1})_{|r}\bigr)^*$,

$(G_2)$ $A^\dagger = \bigl((G^{-1})^{|r}\bigr)^*\Bigl[\bigl((R^{-1})_{|r}B\bigr)^* A \bigl((G^{-1})^{|r}\bigr)^*\Bigr]^{-1}\bigl((R^{-1})_{|r}B\bigr)^*$,

$(G_3)$ $A^\dagger = F\begin{bmatrix} I_r \\ K^* \end{bmatrix}\Bigl[\bigl((R^{-1})_{|r}\bigr)^* A\, F\begin{bmatrix} I_r \\ K^* \end{bmatrix}\Bigr]^{-1}\bigl((R^{-1})_{|r}\bigr)^*$,

$(G_4)$ $A^\dagger = \bigl((G^{-1})^{|r}\bigr)^*\Bigl[[\,I_r,\;K^*\,]\,EA\,\bigl((G^{-1})^{|r}\bigr)^*\Bigr]^{-1}[\,I_r,\;K^*\,]\,E$,

$(G_5)$ $A^\dagger = \bigl((G^{-1})^{|r}\bigr)^*\Bigl[U^{|r} A\, \bigl((G^{-1})^{|r}\bigr)^*\Bigr]^{-1}U^{|r}$,

$(G_6)$ $A^\dagger = V_{|r}\Bigl[\bigl((R^{-1})_{|r}\bigr)^* A\,V_{|r}\Bigr]^{-1}\bigl((R^{-1})_{|r}\bigr)^*$,

$(G_7)$ $A^\dagger = V_{|r}\Bigl[B^* U^{|r} A\,V_{|r}\Bigr]^{-1}B^* U^{|r}$,

$(G_8)$ $A^\dagger = F\begin{bmatrix} B^* \\ K^* \end{bmatrix}\Bigl[U^{|r} A\, F\begin{bmatrix} B^* \\ K^* \end{bmatrix}\Bigr]^{-1}U^{|r}$,

$(G_9)$ $A^\dagger = V_{|r}\Bigl[[\,B^*,\;K^*\,]\,E A\,V_{|r}\Bigr]^{-1}[\,B^*,\;K^*\,]\,E$,

$(G_{10a})$ $A^\dagger = F\begin{bmatrix} I_r \\ T^* \end{bmatrix}\Bigl[A_{11}^*[\,I_r,\;S^*\,]\,E A F\begin{bmatrix} I_r \\ T^* \end{bmatrix}\Bigr]^{-1}A_{11}^*[\,I_r,\;S^*\,]\,E
= F\begin{bmatrix} I_r \\ T^* \end{bmatrix}(I_r + TT^*)^{-1}A_{11}^{-1}(I_r + S^*S)^{-1}[\,I_r,\;S^*\,]\,E$,

$(G_{10b})$ $A^\dagger = F\begin{bmatrix} A_{11}^* \\ A_{12}^* \end{bmatrix}(A_{11}^*)^{-1}\Bigl[[\,A_{11}^*,\;A_{21}^*\,]\,E A F\begin{bmatrix} A_{11}^* \\ A_{12}^* \end{bmatrix}(A_{11}^*)^{-1}\Bigr]^{-1}[\,A_{11}^*,\;A_{21}^*\,]\,E
= F\begin{bmatrix} A_{11}^* \\ A_{12}^* \end{bmatrix}(A_{11}A_{11}^* + A_{12}A_{12}^*)^{-1}A_{11}(A_{11}^*A_{11} + A_{21}^*A_{21})^{-1}[\,A_{11}^*,\;A_{21}^*\,]\,E$,

$(G_{11})$ $A^\dagger = R^*\begin{bmatrix} I_r \\ (T_1^{-1}T_2)^* \end{bmatrix}\Bigl[\bigl((R^{-1})_{|r}T_1\bigr)^* A\,R^*\begin{bmatrix} I_r \\ (T_1^{-1}T_2)^* \end{bmatrix}\Bigr]^{-1}\bigl((R^{-1})_{|r}T_1\bigr)^*$.

Proof. $(G_1)$ Starting from (T1), we obtain
$$A = R^{-1}\begin{bmatrix} I_r & O \\ O & O \end{bmatrix} G^{-1} = R^{-1}\begin{bmatrix} I_r \\ O \end{bmatrix}[\,I_r,\;O\,]\,G^{-1},$$
which implies
$$P = R^{-1}\begin{bmatrix} I_r \\ O \end{bmatrix} = (R^{-1})_{|r}, \qquad Q = [\,I_r,\;O\,]\,G^{-1} = (G^{-1})^{|r}.$$
Now, we get
$$A^\dagger = Q^*(P^*AQ^*)^{-1}P^* = \bigl((G^{-1})^{|r}\bigr)^*\Bigl[\bigl((R^{-1})_{|r}\bigr)^* A \bigl((G^{-1})^{|r}\bigr)^*\Bigr]^{-1}\bigl((R^{-1})_{|r}\bigr)^*.$$
The other block decompositions can be treated in a similar way.

$(G_5)$ Block decomposition (T5) implies
$$A = U^*\begin{bmatrix} I_r & O \\ O & O \end{bmatrix} G^{-1} = U^*\begin{bmatrix} I_r \\ O \end{bmatrix}[\,I_r,\;O\,]\,G^{-1},$$
which means
$$P = U^*\begin{bmatrix} I_r \\ O \end{bmatrix} = (U^*)_{|r}, \qquad Q = [\,I_r,\;O\,]\,G^{-1} = (G^{-1})^{|r}.$$

$(G_7)$ It is easy to see that (T7) implies
$$A = U^*\begin{bmatrix} B & O \\ O & O \end{bmatrix} V^* = U^*\begin{bmatrix} B \\ O \end{bmatrix}[\,I_r,\;O\,]\,V^*.$$
Thus,
$$P = U^*\begin{bmatrix} B \\ O \end{bmatrix} = (U^*)_{|r}B, \qquad Q = [\,I_r,\;O\,]\,V^* = (V^*)^{|r},$$
which means $P^* = B^*U^{|r}$, $Q^* = V_{|r}$.

$(G_{10a})$ From (T10a) we obtain
$$A = E^*\begin{bmatrix} A_{11} & A_{11}T \\ SA_{11} & SA_{11}T \end{bmatrix} F^* = E^*\begin{bmatrix} I_r \\ S \end{bmatrix} A_{11}[\,I_r,\;T\,]\,F^*,$$
which implies, for example, the following full-rank factorization of $A$:
$$P = E^*\begin{bmatrix} I_r \\ S \end{bmatrix} A_{11}, \qquad Q = [\,I_r,\;T\,]\,F^*.$$
Now,
$$A^\dagger = F\begin{bmatrix} I_r \\ T^* \end{bmatrix}\Bigl[A_{11}^*[\,I_r,\;S^*\,]\,E A F\begin{bmatrix} I_r \\ T^* \end{bmatrix}\Bigr]^{-1}A_{11}^*[\,I_r,\;S^*\,]\,E.$$
The proof can be completed using
$$EAF = \begin{bmatrix} I_r \\ S \end{bmatrix} A_{11}[\,I_r,\;T\,].$$
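As a numerical sanity check of the closed form in $(G_{10a})$, the following sketch (an illustrative addition, assuming NumPy and real test data, with $E = F = I$ so that the leading $r \times r$ block already has full rank) assembles $A$ from chosen blocks $A_{11}$, $S$, $T$ and compares the formula with a standard pseudoinverse.

```python
import numpy as np

# Illustrative check of (G10a) with E = F = I:
#   A† = [I; T*] (I + T T*)^{-1} A11^{-1} (I + S* S)^{-1} [I, S*]
# where A = [I; S] A11 [I, T], so that rank(A11) = rank(A) = r.
rng = np.random.default_rng(0)
r, m, n = 2, 4, 5
A11 = rng.standard_normal((r, r))        # generically invertible r x r block
S = rng.standard_normal((m - r, r))      # S = A21 A11^{-1}
T = rng.standard_normal((r, n - r))      # T = A11^{-1} A12

A = np.vstack([np.eye(r), S]) @ A11 @ np.hstack([np.eye(r), T])

Ir = np.eye(r)
A_pinv = (np.vstack([Ir, T.T])
          @ np.linalg.inv(Ir + T @ T.T)
          @ np.linalg.inv(A11)
          @ np.linalg.inv(Ir + S.T @ S)
          @ np.hstack([Ir, S.T]))

print(np.allclose(A_pinv, np.linalg.pinv(A)))   # expected: True
```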
Remarks 2.1. (i) A convenient method for finding the matrices $S$, $T$ and $A_{11}^{-1}$ required in (T10a) was introduced in [8]; it is based on the following extended Gauss-Jordan transformation:
$$\left[\begin{array}{cc|c} A_{11} & A_{12} & I \\ A_{21} & A_{22} & O \end{array}\right] \longrightarrow \left[\begin{array}{cc|c} I & T & A_{11}^{-1} \\ O & O & -S \end{array}\right].$$

(ii) In [3] the following full-rank factorization of $A$, derived from
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{21}A_{11}^{-1}A_{12} \end{bmatrix},$$
was used:
$$P = \begin{bmatrix} A_{11} \\ A_{21} \end{bmatrix}, \qquad Q = [\,I_r,\;A_{11}^{-1}A_{12}\,].$$

3. Determinantal representation

In the following definition the concepts of determinant, algebraic complement and adjoint matrix are generalized, as used in the determinantal representation of generalized inverses (see also [14]).

Definition 3.1. Let $A$ be an $m \times n$ matrix of rank $r$.

(i) The generalized determinant of $A$, denoted by $N_r(A)$, is equal to
$$N_r(A) = \sum_{\alpha,\beta} \overline{|A^\alpha_\beta|}\,|A^\alpha_\beta|.$$

(ii) The generalized algebraic complement of $A$ corresponding to $a_{ij}$ is
$$A^\dagger_{ij} = \sum_{\alpha:\,j\in\alpha;\;\beta:\,i\in\beta} \overline{|A^\alpha_\beta|}\;\frac{\partial}{\partial a_{ji}}|A^\alpha_\beta|.$$

(iii) The generalized adjoint matrix of $A$, denoted by $\operatorname{adj}^\dagger(A)$, is the matrix whose elements are $A^\dagger_{ij}$.
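The generalized determinant and generalized adjoint give a direct, if expensive, way to compute $A^\dagger$ entrywise. The following sketch (an illustrative addition, assuming NumPy and a small real matrix, so the conjugations are vacuous) implements Proposition 1.1 by brute force over all $r \times r$ minors.

```python
import numpy as np
from itertools import combinations

def pinv_determinantal(A):
    """Moore-Penrose inverse of a small real matrix via the determinantal
    representation (Proposition 1.1): brute force over all r x r minors."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    rows = list(combinations(range(m), r))
    cols = list(combinations(range(n), r))

    # Generalized determinant N_r(A): sum of squared r x r minors (real case)
    Nr = sum(np.linalg.det(A[np.ix_(a, b)]) ** 2 for a in rows for b in cols)

    G = np.zeros((n, m))
    for a in rows:                          # a plays the role of alpha (rows of A)
        for b in cols:                      # b plays the role of beta (columns of A)
            sub = A[np.ix_(a, b)]
            minor = np.linalg.det(sub)
            for p, j in enumerate(a):       # j in alpha
                for q, i in enumerate(b):   # i in beta
                    # d|A^a_b| / d a_{ji}: signed minor of sub with row p and column q removed
                    reduced = np.delete(np.delete(sub, p, axis=0), q, axis=1)
                    cof = (-1) ** (p + q) * (np.linalg.det(reduced) if r > 1 else 1.0)
                    G[i, j] += minor * cof
    return G / Nr

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])               # rank 2
print(np.allclose(pinv_determinantal(A), np.linalg.pinv(A)))   # expected: True
```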