A Multi-Platform Evaluation of the Randomized CX Low-Rank Matrix Factorization in Spark

Alex Gittens∗, Jey Kottalam†, Jiyan Yang‡, Michael F. Ringenburg§, Jatin Chhugani¶, Evan Racah‖, Mohitdeep Singh∗∗, Yushu Yao‖, Curt Fischer††, Oliver Ruebel‡‡, Benjamin Bowen††, Norman G. Lewis§§, Michael W. Mahoney∗, Venkat Krishnamurthy§, Prabhat‖

∗ICSI and Department of Statistics, UC Berkeley; Emails: [email protected], [email protected]
†Berkeley Institute for Data Science and EECS, UC Berkeley; Email: [email protected]
‡ICME, Stanford University; Email: [email protected]
§Cray Inc.; Emails: [email protected], [email protected]
¶HiPerform Inc.; Email: [email protected]
‖NERSC Division, Lawrence Berkeley National Laboratory; Emails: [email protected], [email protected], [email protected]
∗∗Georgia Institute of Technology; Email: [email protected]
††Life Sciences Division, Lawrence Berkeley National Laboratory; Emails: [email protected], [email protected]
‡‡Computational Research Division, Lawrence Berkeley National Laboratory; Email: [email protected]
§§Institute of Biological Chemistry, Washington State University; Email: [email protected]

Abstract—We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices, and vector computation using SIMD units. We report these results and their implications for the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.

Keywords—matrix factorization; data analytics; high performance computing

I. INTRODUCTION

Matrix algorithms are increasingly important in many large-scale data analysis applications. Essentially, the reason is that matrices (i.e., sets of vectors in Euclidean spaces) provide a convenient mathematical structure with which to model data arising in a broad range of applications. In particular, the low-rank approximation to a data matrix $A$ that is provided by performing a truncated SVD (singular value decomposition)—or PCA (principal component analysis) or CX/CUR decompositions—is a very complicated object compared with what is conveniently supported by traditional database operations [1]. Recall that PCA finds mutually orthogonal directions that maximize the variance captured by the factorization, and CX/CUR provides an interpretable low-rank factorization by selecting a small number of columns/rows from the original data matrix. Described in more detail in Section II, these low-rank approximation methods are popular in small- and medium-scale machine learning and scientific data analysis applications for exploratory data analysis and for providing compact and interpretable representations of complex matrix-based data, but their implementation at scale remains a challenge.

In this paper, we address the following research questions:
• Can we successfully apply low-rank matrix factorization methods (such as CX) to a TB-scale scientific dataset?
• Can we implement CX in a contemporary data analytics framework such as Spark?
• What is the performance gap between a highly tuned C implementation of CX and a Spark-based one?
• How well does a Spark-based CX implementation scale on modern HPC and data-center hardware platforms?

We start with a description of matrix factorization algorithms in Section II, followed by single-node and multi-node implementation details in Section III. We review the experimental setup for our performance tests in Section IV, followed by results and discussion in Section V.
II. LOW-RANK MATRIX FACTORIZATION METHODS

Given an $m \times n$ data matrix $A$, low-rank matrix factorization methods aim to find two smaller matrices whose product is a good approximation to $A$. That is, they aim to find matrices $Y$ and $Z$ such that

$$\underbrace{A}_{m \times n} \;\approx\; \underbrace{Y}_{m \times k} \times \underbrace{Z}_{k \times n}, \qquad (1)$$

where $Y \times Z$ is a rank-$k$ approximation to the original matrix $A$. Low-rank matrix factorization methods are an important topic in linear algebra and numerical analysis, and they find use in a variety of scientific fields and in scientific computing, as well as in machine learning and data analysis applications such as pattern recognition and personalized recommendation.

Depending on the application, various low-rank factorization techniques are of interest. Popular choices include the singular value decomposition [2], principal component analysis [3], rank-revealing QR factorization [4], nonnegative matrix factorization [5], and CUR/CX decompositions [6].

In this work, we consider using the SVD and CX decompositions for scalable and interpretable data analysis; in the remainder of this section, we briefly describe these decompositions. For an arbitrary matrix $A$, denote by $a_i$ its $i$-th row, by $a^j$ its $j$-th column, and by $a_{ij}$ its $(i,j)$-th element. Throughout, we assume $A$ has size $m \times n$ and rank $r$.

A. SVD and PCA

The singular value decomposition (SVD) is the factorization of $A \in \mathbb{R}^{m \times n}$ into the product of three matrices $U \Sigma V^T$, where $U \in \mathbb{R}^{m \times r}$ and $V \in \mathbb{R}^{n \times r}$ have orthonormal columns and $\Sigma \in \mathbb{R}^{r \times r}$ is a diagonal matrix with positive real entries. The columns of $U$ and $V$ are called the left and right singular vectors, and the diagonal entries of $\Sigma$ are called the singular values. For notational convenience, we assume the singular values are sorted such that $\sigma_1 \geq \cdots \geq \sigma_r \geq 0$.

The SVD is of central interest because it provides the "best" low-rank matrix approximation with respect to any unitarily invariant matrix norm. In particular, for any target rank $k \leq r$, the SVD provides the minimizer of the optimization problem

$$\min_{\operatorname{rank}(\tilde{A}) = k} \|A - \tilde{A}\|_F, \qquad (2)$$

where the Frobenius norm $\|\cdot\|_F$ is defined as $\|X\|_F^2 = \sum_{i=1}^{m} \sum_{j=1}^{n} X_{ij}^2$. Specifically, the solution to (2) is given by the truncated SVD, i.e., $A_k = U_k \Sigma_k V_k^T$, where the columns of $U_k$ and $V_k$ are the top $k$ singular vectors, i.e., the first $k$ columns of $U$ and $V$, respectively, and $\Sigma_k$ is a diagonal matrix containing the top $k$ singular values.

Principal component analysis (PCA) and SVD are closely related. PCA aims to convert the original features into a set of orthogonal directions, called principal components, that capture most of the variance in the data points. The PCA decomposition of $A$ is given by the SVD of the matrix formed by centering each column of $A$ (i.e., removing the mean of each column). When low-rank methods are appropriate, the number of principal components needed to preserve most of the information in $A$ is far smaller than the number of original features, and thus the goal of dimension reduction is achieved.
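As a concrete check of these definitions, the following self-contained Scala sketch uses the Breeze linear algebra library to form the truncated SVD $A_k$ solving (2), and to obtain principal components by first centering each column. It is a minimal illustration under our own naming, not code from the implementations evaluated later in this paper:

```scala
// A minimal, self-contained illustration of Section II-A using Breeze.
// The matrix, names, and sizes below are our own examples, not the
// paper's data or code.
import breeze.linalg._
import breeze.stats.mean

object SvdPcaExample {
  /** Rank-k truncated SVD: returns (U_k, sigma_k, V_k), so that
    * A_k = U_k * diag(sigma_k) * V_k^T solves Eq. (2). */
  def truncatedSVD(a: DenseMatrix[Double], k: Int)
      : (DenseMatrix[Double], DenseVector[Double], DenseMatrix[Double]) = {
    val svd.SVD(u, s, vt) = svd.reduced(a) // thin SVD: A = U diag(s) V^T
    (u(::, 0 until k).copy, s(0 until k).copy, vt(0 until k, ::).t.copy)
  }

  def main(args: Array[String]): Unit = {
    val rng = new scala.util.Random(42)
    val a = DenseMatrix.fill(1000, 50)(rng.nextGaussian())

    // Best rank-k approximation in Frobenius norm: A_k = U_k Sigma_k V_k^T.
    val k = 5
    val (uk, sk, vk) = truncatedSVD(a, k)
    val ak = uk * diag(sk) * vk.t
    println(s"||A - A_k||_F = ${norm((a - ak).toDenseVector)}")

    // PCA: center each column, then the right singular vectors of the
    // centered matrix are the principal components of A.
    val centered = a(*, ::) - mean(a(::, *)).t
    val (_, _, principalComponents) = truncatedSVD(centered, k)
    println(s"each principal component has ${principalComponents.rows} entries")
  }
}
```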
B. Randomized SVD

The computation of the SVD (and thus of PCA) for a data matrix $A$ is expensive [2]. For example, computing the truncated SVD with rank $k$ using traditional deterministic methods has running time complexity $O(mnk)$ and requires $O(k)$ passes over the dataset. This becomes prohibitively expensive when dealing with datasets of even moderately large size, e.g., $m = 10^6$, $n = 10^4$, and $k = 20$. To address these and related issues, recent work in Randomized Linear Algebra (RLA) has focused on using randomized approximation to perform scalable linear algebra computations for large-scale data problems. For an overview of the RLA area, see [7]; for a review of using RLA methods for low-rank matrix approximation, see [8]; and for a review of the theory underlying implementing RLA methods in parallel/distributed environments, see [9].

Here, we will use an algorithm introduced in [10], [11] that uses a random projection to construct a rank-$k$ approximation to $A$ which approximates $A$ nearly as well as $A_k$ does. We refer the reader to [7], [8] for more details. Importantly, the algorithm runs in $O(mn \log k)$ time and needs only a constant number of passes over the data matrix. These properties become extremely desirable in many large-scale data analytics applications. This algorithm, which we refer to as RANDOMIZEDSVD, is summarized in Algorithm 1. (Algorithm 1 calls MULTIPLYGRAMIAN, which is summarized in Algorithm 2, as well as three algorithms, MULTIPLY, THINQR, and THINSVD, which are standard in numerical linear algebra [2].) The running time of RANDOMIZEDSVD is dominated by the matrix-matrix multiplications that pass over the entire data matrix, appearing in Step 3 and Step 7 of Algorithm 1. These steps can be parallelized, and hence RANDOMIZEDSVD is amenable to distributed computing.

Algorithm 1 RANDOMIZEDSVD
Input: $A \in \mathbb{R}^{m \times n}$, number of power iterations $q \geq 1$, target rank $r > 0$, slack $\ell \geq 0$; let $k = r + \ell$.
Output: $U \Sigma V^T \approx \textsc{ThinSVD}(A, r)$.
1: Initialize $B \in \mathbb{R}^{n \times k}$ by sampling $B_{ij} \sim \mathcal{N}(0, 1)$.
2: for $q$ times do
3:   $B \leftarrow \textsc{MultiplyGramian}(A, B)$
4:   $(B, \cdot) \leftarrow \textsc{ThinQR}(B)$
5: end for
6: Let $Q$ be the first $r$ columns of $B$.
7: Let $C = \textsc{Multiply}(A, Q)$.
8: Compute $(U, \Sigma, \tilde{V}^T) = \textsc{ThinSVD}(C)$.
9: Let $V = Q\tilde{V}$.

C. CX/CUR decompositions

In addition to developing improved algorithms for PCA/SVD and related problems, work in RLA has also focused on so-called CX/CUR decompositions [6], [12]. As a motivation, observe that singular vectors are eigenvectors of the Gram matrix $A^T A$, and thus they are linear combinations of up to all of the original columns; this makes them difficult to interpret in terms of the underlying data, whereas CX/CUR decompositions are expressed in terms of a small number of actual columns/rows of $A$.

Algorithm 2 MULTIPLYGRAMIAN
Input: $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{n \times k}$.
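Although the body of Algorithm 2 is not shown above, its role in Algorithm 1 is fully determined: $\textsc{MultiplyGramian}(A, B)$ returns $A^T A B$, which can be accumulated in a single pass over the rows of $A$ via the identity $A^T A B = \sum_{i=1}^{m} a_i^T (a_i B)$. The following Scala sketch transcribes Algorithms 1 and 2 into Spark, with Breeze handling the small local factorizations. It is a minimal illustration only, assuming the rows of $A$ are stored as an RDD of dense vectors; it is not the tuned Spark implementation described in Section III, and all identifiers in it are our own:

```scala
// Illustrative transcription of Algorithms 1 and 2 into Spark + Breeze.
// Assumes the rows of A are stored as an RDD[DenseVector[Double]].
// This is a minimal sketch, not the tuned implementation of Section III.
import breeze.linalg.{DenseMatrix, DenseVector, qr, svd}
import org.apache.spark.rdd.RDD

object RandomizedSVDSketch {

  /** Algorithm 2 (MULTIPLYGRAMIAN): returns A^T A B in one pass over A,
    * using A^T A B = sum_i a_i (a_i^T B), where row i of A is stored
    * as a Breeze column vector. */
  def multiplyGramian(rows: RDD[DenseVector[Double]],
                      b: DenseMatrix[Double]): DenseMatrix[Double] = {
    val bBc = rows.sparkContext.broadcast(b)
    rows.treeAggregate(DenseMatrix.zeros[Double](b.rows, b.cols))(
      seqOp = (acc, ai) => acc += ai * (ai.t * bBc.value), // rank-one update
      combOp = (m1, m2) => m1 += m2                        // sum partials
    )
  }

  /** Algorithm 1 (RANDOMIZEDSVD) with target rank r, slack l, and q
    * power iterations; n is the number of columns of A. */
  def randomizedSVD(rows: RDD[DenseVector[Double]], n: Int,
                    r: Int, l: Int, q: Int)
      : (DenseMatrix[Double], DenseVector[Double], DenseMatrix[Double]) = {
    val k = r + l
    val rng = new scala.util.Random(0)
    // Step 1: Gaussian initialization, B_ij ~ N(0, 1).
    var bMat = DenseMatrix.fill(n, k)(rng.nextGaussian())
    // Steps 2-5: power iterations; each pass over A is one Spark job.
    for (_ <- 1 to q) {
      bMat = multiplyGramian(rows, bMat) // B <- A^T A B
      bMat = qr.reduced(bMat).q          // (B, _) <- ThinQR(B)
    }
    // Step 6: Q = first r columns of B.
    val qMat = bMat(::, 0 until r).copy
    // Step 7: C = A Q in one more pass; row i of C is Q^T a_i. Collecting
    // the m x r matrix C to the driver is purely for illustration; at TB
    // scale this step would itself stay distributed.
    val qBc = rows.sparkContext.broadcast(qMat)
    val cRows = rows.map(ai => qBc.value.t * ai).collect()
    val c = DenseMatrix.zeros[Double](cRows.length, r)
    for (i <- cRows.indices) c(i, ::) := cRows(i).t
    // Steps 8-9: thin SVD of the tall, skinny C, then V = Q * Vtilde.
    val svd.SVD(u, s, vtTilde) = svd.reduced(c)
    (u, s, qMat * vtTilde.t)
  }
}
```

Summing the per-row contributions with treeAggregate combines the $n \times k$ partial sums in a tree across the executors, so the amount of data returned to the driver stays bounded regardless of the number of partitions; this matters because Steps 3 and 7 are the passes over the full data matrix that dominate the running time.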
