Computation Efficient Coded Linear Transform

Sinong Wang¹, Jiashang Liu¹, Ness Shroff¹,², Pengyu Yang³
¹Department of Electrical and Computer Engineering, The Ohio State University
²Department of Computer Science and Engineering, The Ohio State University
³Department of Mathematics, The Ohio State University

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS) 2019, Naha, Okinawa, Japan. PMLR: Volume 89. Copyright 2019 by the author(s).

Abstract

In large-scale distributed linear transform problems, coded computation plays an important role in reducing the delay caused by slow machines. However, existing coded schemes can destroy the significant sparsity that exists in large-scale machine learning problems and, in turn, increase the computational delay. In this paper, we propose a coded computation strategy, referred to as the diagonal code, that achieves the optimum recovery threshold and the optimum computation load. Furthermore, by leveraging ideas from random proposal graph theory, we design a random code that achieves a constant computation load, which significantly outperforms the best known existing results. We apply our schemes to the distributed gradient descent problem and demonstrate the advantage of our approach over the current fastest coded schemes.

1 Introduction

In this paper, we consider a distributed linear transform problem in which we aim to compute y = Ax from an input matrix A ∈ R^{r×t} and a vector x ∈ R^t. This problem is a key building block in machine learning and signal processing and arises in a large variety of application areas. Optimization-based training algorithms, such as gradient descent for regression and classification problems and backpropagation in deep neural networks, require the computation of large linear transforms of high-dimensional data. It is also the critical step in dimensionality reduction techniques such as principal component analysis and linear discriminant analysis. Many such applications involve large-scale datasets and massive computational tasks that force practitioners to adopt distributed computing frameworks such as Hadoop [Dean and Ghemawat, 2008] and Spark [Zaharia et al., 2010] to increase the learning speed.

Classical approaches to distributed linear transforms divide the input matrix A equally among all available worker nodes, and the master node must collect the results from all workers to output y. As a result, a major performance bottleneck is the latency incurred in waiting for a few slow or faulty processors, called "stragglers", to finish their tasks [Dean et al., 2012, Dean and Barroso, 2013]. Recently, forward error correction and other coding techniques have been shown to be effective in dealing with stragglers in distributed computation tasks [Dutta et al., 2016, Lee et al., 2017a,b, Tandon et al., 2017, Yu et al., 2017, Karakus et al., 2017, Yang et al., 2017, Wang et al.]. By exploiting coding redundancy, the vector y is recoverable even if not all workers have finished their computations, which reduces the delay caused by straggler nodes. For example, consider a distributed system with 3 worker nodes. The coding scheme first splits the matrix A into two submatrices, i.e., A = [A1; A2]; the three workers then compute A1x, A2x, and (A1 + A2)x, respectively. The master node can compute Ax as soon as any 2 of the 3 workers finish, and can therefore tolerate one straggler (slow worker).
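As a concrete illustration, the following minimal sketch (the dimensions, worker indexing, and variable names are our own, not from the paper) simulates this 3-worker scheme in NumPy and recovers Ax from any two finished workers:

```python
import numpy as np

# A is split row-wise into A1 and A2; the three workers compute
# A1 @ x, A2 @ x, and (A1 + A2) @ x, respectively.
rng = np.random.default_rng(0)
r, t = 6, 4
A = rng.standard_normal((r, t))
x = rng.standard_normal(t)
A1, A2 = A[: r // 2], A[r // 2:]
results = {0: A1 @ x, 1: A2 @ x, 2: (A1 + A2) @ x}

def decode(finished):
    """Recover Ax from the results of any 2 of the 3 workers."""
    if {0, 1} <= finished.keys():
        return np.concatenate([finished[0], finished[1]])
    if {0, 2} <= finished.keys():
        return np.concatenate([finished[0], finished[2] - finished[0]])
    return np.concatenate([finished[2] - finished[1], finished[1]])

# Worker 1 straggles; workers 0 and 2 alone are enough to recover Ax.
y = decode({k: results[k] for k in (0, 2)})
assert np.allclose(y, A @ x)
```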
In a general setting with m workers, the input matrix A is evenly divided along its rows into n (n ≤ m) submatrices A_i ∈ R^{(r/n)×t}. Each worker computes a partially coded linear transform and returns the result to the master node. Given a computation strategy, the recovery threshold is defined as the minimum number of workers that the master needs to wait for in order to compute Ax. The MDS-coded scheme above is shown to achieve a recovery threshold of Θ(m) [Lee et al., 2017a]. An improved scheme proposed in [Dutta et al., 2016], referred to as the short-dot code, offers a larger recovery threshold, Θ(m(1 + ε)), but a constant-factor reduction in computation load, obtained by imposing some sparsity on the encoded submatrices. More recently, the work in [Yu et al., 2017] designs a type of polynomial code that achieves the information-theoretically optimal recovery threshold n. However, many problems in machine learning involve data that is both extremely large-scale and sparse, i.e., nnz(A) ≪ rt. The key question that arises in this scenario is: is coding really an efficient way to mitigate stragglers in the distributed sparse linear transform problem?

Table 1: Comparison of existing schemes in coded computation.

| Scheme | MDS code | Short-dot code | Polynomial code | s-MDS code | Sparse code | s-diagonal code | (d1, d2)-cross code |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Recovery threshold | Θ(m) | Θ(m(1 + ε)) | n | n* | Θ(n)* | n | n* |
| Computation load / m | O(n) | O(n(1 − ε)) | n | Θ(log(n))* | Θ(log(n))* | O(s) | Θ(1)* |

*The result holds with high probability, i.e., with probability 1 − e^{−cn}.

1.1 Motivation: Coded Computation Overhead

To answer this question, we take the PageRank problem [Page et al., 1999] and the existing polynomial code [Yu et al., 2017] as an example. This problem aims to measure the importance score of the nodes of a graph and is typically solved by the following power iteration:

    x_t = cr + (1 − c) A x_{t−1},    (1)

where c = 0.15 is a constant and A is the graph adjacency matrix. In practical applications such as web search, the adjacency matrix A is extremely large and sparse. The naive way to solve (1) is to partition A into n equal blocks {A_i}_{i=1}^n and store them in the memory of several workers. In each iteration, the master node broadcasts x_t to all workers, each worker computes a partial linear transform A_i x_t and sends it back to the master node, and the master node then collects all the partial results and updates the vector.

The polynomial code [Yu et al., 2017] works as follows: in each iteration, the kth worker essentially computes

    ỹ_k = Ã_k x ≜ ( ∑_{i=1}^{n} A_i x_k^i ) x,    (2)

where x_k is a given integer. One can observe that, due to the sparsity of the matrix A and the matrix additions in (2), the density of the coded submatrix Ã_k can increase by up to a factor of n, so the time to compute the linear transform Ã_k x grows to roughly O(n) times that of the simple uncoded scheme.
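To see this density blow-up concretely, here is a small SciPy sketch of the encoding in Eq. (2); the block sizes and the evaluation points x_k = 1, ..., m are our own illustrative choices, not values from the paper:

```python
import numpy as np
from scipy import sparse

# Toy sketch of polynomial-code encoding: summing n sparse blocks
# densifies the matrix each worker stores and multiplies.
n, m = 4, 6                        # n row-blocks, m workers
r, t, p = 400, 400, 0.01           # a sparse r x t matrix with density p
A = sparse.random(r, t, density=p, format="csr", random_state=1)
blocks = [A[i * (r // n):(i + 1) * (r // n)] for i in range(n)]

def encode(x_k):
    """Encoded block for one worker: A~_k = sum_{i=1}^{n} A_i * x_k^i."""
    out = blocks[0] * float(x_k)
    for i in range(1, n):
        out = out + blocks[i] * float(x_k ** (i + 1))
    return out

encoded = [encode(k) for k in range(1, m + 1)]

def density(M):
    return M.nnz / (M.shape[0] * M.shape[1])

print("average uncoded block density:", np.mean([density(B) for B in blocks]))
print("average encoded block density:", np.mean([density(E) for E in encoded]))
```

With these toy numbers the encoded density comes out close to n times the uncoded one (roughly 1 − (1 − p)^n ≈ np for small p), which is the source of the O(n) per-worker slowdown discussed next.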
In Figure 1(a), we show measured local computation and communication times for m = 20 workers running the polynomial code and the naive uncoded distributed scheme on the above problem, with dimension roughly equal to 10^6 and number of nonzero elements roughly equal to 10^7. We observe that the final job completion time of the polynomial code is significantly larger than that of the uncoded scheme. The main reason is that the increased density of the encoded matrix leads to increased computation time. In Figure 1(b), we generate 10^5-dimensional random square Bernoulli matrices with different densities p and plot the ratio of the average local computation time of the polynomial code to that of the uncoded scheme versus the density p. This ratio is generally large, on the order of O(n), when the matrix is sparse.

[Figure 1: Measured local computation time per worker. (a) Computation and communication time (s) of the uncoded scheme and the polynomial code. (b) Ratio of computation times versus density p (×10^{−3}) for n = 10, 20, 30.]

Inspired by this phenomenon, we propose a new metric, named the computation load, defined as the total number of submatrices the local workers access (the formal definition is given in Section 2). For example, the polynomial code achieves the optimal recovery threshold but a large computation load of mn. Certain suboptimal codes, such as the short-dot code, achieve a slightly lower computation load, but it is still on the order of O(mn). Specifically, we are interested in the following key problem: can we find a coded linear transform scheme that achieves the optimal recovery threshold and a low computation load?

1.2 Main Contribution

In this paper, we establish a fundamental limit on the minimum computation load: given n partitions and s stragglers, the minimum computation load of any coded computation scheme is n(s + 1). We then design a novel scheme, which we call the s-diagonal code, that achieves both the optimum recovery threshold and the minimum computation load. We also exploit the diagonal structure to design a hybrid decoding algorithm, combining peeling decoding and Gaussian elimination, whose decoding time is nearly linear in the output dimension, O(r).

We further show that, under random code designs, the computation load can be reduced even further, to a constant per worker, with high probability. Specifically, we define a performance metric, the probabilistic recovery threshold, which only requires the coded computation scheme to be decodable with high probability. Under this metric, several existing schemes, namely the sparse MDS code [Lee et al., 2017b] and the sparse code [Wang et al.], achieve the optimal probabilistic recovery threshold with a computation load of Θ(n log(n)). In this work, we construct a new (d1, d2)-cross code that achieves the optimal probabilistic recovery threshold with only a constant computation load, Θ(n). The existing results and ours are compared in Table 1.

The theoretical analysis showing the optimality of the recovery threshold is based on a determinant analysis of a random matrix in R^{m×n}.
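As a toy illustration of such a determinant argument (our own sketch of the deterministic case, not the paper's random-matrix analysis), one can verify numerically that for the polynomial code of Eq. (2), with distinct nonzero evaluation points, every n × n submatrix of the m × n coefficient matrix is nonsingular, so the master can decode from any n finished workers:

```python
import numpy as np
from itertools import combinations
from math import comb

# Coefficient matrix of Eq. (2): M[k, i] = x_k^(i+1), here with x_k = k + 1.
# Each row collects the coefficients a worker applies to the blocks A_1..A_n.
n, m = 4, 6
points = np.arange(1, m + 1, dtype=float)
M = np.vander(points, n, increasing=True) * points[:, None]

# Any n rows form a scaled Vandermonde matrix, hence have nonzero determinant.
for rows in combinations(range(m), n):
    sub = M[list(rows)]
    assert abs(np.linalg.det(sub)) > 1e-9, f"singular subset {rows}"
print(f"all {comb(m, n)} subsets of {n} workers yield an invertible system")
```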
