Parallel Algorithms through Approximation: b-Edge Cover

Arif Khan, Data Analytics, Pacific Northwest National Laboratory, Richland, Washington, USA ([email protected])
Alex Pothen, Computer Science, Purdue University, West Lafayette, Indiana, USA ([email protected])
S M Ferdous, Computer Science, Purdue University, West Lafayette, Indiana, USA ([email protected])

Abstract—We describe a paradigm for designing parallel algorithms via approximation, and illustrate it on the b-EDGE COVER problem. A b-EDGE COVER of minimum weight in a graph is a subset C of its edges such that at least a specified number b(v) of edges in C is incident on each vertex v, and the sum of the edge weights in C is minimum. The GREEDY algorithm and a variant, the LSE algorithm, provide 3/2-approximation guarantees in the worst case for this problem, but these algorithms have limited parallelism. Hence we design two new 2-approximation algorithms with greater concurrency. The MCE algorithm reduces the computation of a b-EDGE COVER to that of finding a b′-MATCHING, by exploiting the relationship between these subgraphs in an approximation context. The S-LSE algorithm is derived from the LSE algorithm by using static edge weights rather than dynamically computed effective edge weights. This relaxation gives S-LSE a worse approximation guarantee but makes it more amenable to parallelization. We prove that both the MCE and S-LSE algorithms compute the same b-EDGE COVER, with at most twice the weight of the minimum weight edge cover. In practice, the 2-approximation and 3/2-approximation algorithms compute edge covers of weight within 10% of the optimal. We implement three of the approximation algorithms, MCE, LSE, and S-LSE, on shared-memory multi-core machines, including an Intel Xeon and an IBM Power8 machine with 8 TB of memory. The MCE algorithm is the fastest of these by an order of magnitude or more. It computes an edge cover in a graph with billions of edges in 20 seconds using two hundred threads on the IBM Power8. We also show that the parallel depth and work can be bounded for the SUITOR and b-SUITOR algorithms when edge weights are random.

Keywords—b-EDGE COVER; b-MATCHING; Approximation Algorithms; Parallel Algorithms.

I. INTRODUCTION

We consider a paradigm for designing practical parallel algorithms for certain graph problems through approximation algorithms. Algorithms for solving these problems exactly are impractical for massive graphs, and possess little concurrency. Often approximation algorithms can solve such problems faster in serial, but to take advantage of parallelism, new algorithms that possess high degrees of concurrency need to be designed.

We illustrate this paradigm by considering the minimum weight b-EDGE COVER problem, where the objective is to choose a subset of edges C in the graph such that at least a specified number b(v) of edges in C is incident on each vertex v. Subject to this restriction on the subset of edges, we minimize the sum of the weights of the edges in C. The closely related maximum weight b-MATCHING problem chooses a subset of at most b(v) edges incident on each vertex v to include in the matching, and then we maximize the sum of the weights of the matched edges. We will describe the complementary relationship between these two problems for approximation algorithms.
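To make the two definitions concrete, the following sketch (ours, not the paper's; the function names and the dictionary-based graph representation are illustrative assumptions) checks whether an edge subset is a feasible b-EDGE COVER or b-MATCHING and computes its weight.

```python
from collections import defaultdict

def incident_counts(edge_subset):
    """Number of edges in the subset incident on each vertex."""
    count = defaultdict(int)
    for u, v in edge_subset:
        count[u] += 1
        count[v] += 1
    return count

def is_b_edge_cover(vertices, edge_subset, b):
    """Feasible cover: at least b[v] subset edges touch each vertex v."""
    count = incident_counts(edge_subset)
    return all(count[v] >= b[v] for v in vertices)

def is_b_matching(vertices, edge_subset, b):
    """Feasible matching: at most b[v] subset edges touch each vertex v."""
    count = incident_counts(edge_subset)
    return all(count[v] <= b[v] for v in vertices)

def total_weight(edge_subset, w):
    """Sum of edge weights; minimized for covers, maximized for matchings."""
    return sum(w[u, v] for u, v in edge_subset)
```

For example, on a path a-b-c with b(v) = 1 everywhere, the full edge set {(a,b), (b,c)} is the only edge cover, while either single edge by itself is a feasible matching.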
The paradigm of designing approximation algorithms for parallelism has been considered in the theoretical computer science community for vertex and set cover problems by Khuller, Vishkin and Young [14], and for facility location, max cut, set cover, and low-stretch spanning trees by Blelloch, Tangwongsan and coauthors, e.g., [3], [22]. The idea underlying many of these parallel algorithms is that a greedy algorithm chooses a most cost-effective element in each iteration, and by allowing a small slack, a factor of (1 + ε), more elements can be selected at the cost of a slightly worse approximation ratio. These are algorithms with poly-logarithmic depth, and although some of them have linear work requirements, there are no parallel implementations that we know of.

The approximation paradigm for parallelism has been previously employed for MATCHING and b-MATCHING problems. The GREEDY algorithm for MATCHING does not have much concurrency, and Preis [19] developed the Locally Dominant edge algorithm, which was implemented for shared-memory parallel machines by Manne and Bisseling [16]. Manne and Halappanavar [17] developed the SUITOR algorithm, which has even more concurrency at the expense of annulled proposals, and this algorithm was extended to the b-SUITOR algorithm for b-MATCHINGS on both shared-memory and distributed-memory computers by our group [12], [13]. Azad et al. [1] have applied a (2/3 − ε)-approximation algorithm for weighted perfect matchings in bipartite graphs to compute good orderings for sparse Gaussian elimination.

The minimum weight b-EDGE COVER problem is rich in the space of approximation algorithms, and we consider four such algorithms here. A GREEDY algorithm and a variant, the LSE (locally subdominant edge) algorithm that we have designed earlier, have 3/2-approximation ratios. Since these algorithms do not have much parallelism, we describe new 2-approximation algorithms that are more concurrent. We implement the new approximation algorithms on a multicore shared-memory multiprocessor, and compare their performance with the earlier 3/2-approximation algorithms for this problem. Thus we trade off increased parallel performance for a slightly higher worst-case approximation ratio. We show that in practice nearly minimum weight edge covers are obtained. In the next few paragraphs, we add more detail to these statements.

The GREEDY algorithm for the b-EDGE COVER problem requires the effective weight of each edge, which is the weight of the edge divided by the number of its endpoints that do not yet have their b(·) values satisfied by edges included in the cover. Thus initially this is half the weight of an edge (u, v); it could then equal the weight of the edge, or become infinite when the b(v) values of one or both of its endpoints are satisfied. At each iteration, an edge with the minimum value of the effective weight is added to the cover, and the weights of neighboring edges are updated. The order in which the edges are added to the cover and the dynamic updates of the edge weights make the algorithm not amenable to parallelization.
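A minimal serial sketch of this GREEDY algorithm follows (ours; it assumes the same dictionary-based representation as the earlier fragment, and uses a lazy priority queue, which is valid here because effective weights only increase as b(·) values are satisfied).

```python
import heapq

def greedy_b_edge_cover(edges, w, b):
    """Serial GREEDY for minimum weight b-edge cover (a sketch).

    edges: iterable of (u, v) tuples; w: dict mapping each edge (u, v)
    to its positive weight; b: dict mapping every vertex to b(v).
    """
    need = dict(b)  # how many more cover edges each vertex still needs

    def eff(u, v):
        # Effective weight: w(u, v) divided by the number of endpoints
        # whose b(.) value is not yet satisfied; infinite if neither is.
        k = (need[u] > 0) + (need[v] > 0)
        return w[u, v] / k if k > 0 else float("inf")

    heap = [(eff(u, v), u, v) for (u, v) in edges]
    heapq.heapify(heap)
    cover = set()
    deficient = sum(1 for x in need if need[x] > 0)
    while deficient > 0 and heap:
        ew, u, v = heapq.heappop(heap)
        current = eff(u, v)
        if current == float("inf"):
            continue                  # edge can no longer help any vertex
        if current > ew:              # stale entry: weights only increase,
            heapq.heappush(heap, (current, u, v))  # so reinsert and retry
            continue
        cover.add((u, v))             # a minimum effective weight edge
        for x in (u, v):
            if need[x] > 0:
                need[x] -= 1
                if need[x] == 0:
                    deficient -= 1
    return cover
```

Each pop, and the weight updates it triggers, depends on all earlier choices; this sequential dependence is the bottleneck that the more concurrent algorithms in this paper avoid.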
Earlier, we have proposed a 3/2-approximation algorithm called the LSE algorithm [11], which relaxes the order in which edges are added to the cover, making it more concurrent. An edge (u, v) is locally sub-dominant if it has the minimum weight among all edges incident on its endpoints u and v. The LSE algorithm adds a locally sub-dominant edge to the cover, deletes this edge from the graph, updates the effective weights of neighboring edges, and updates the b(·) values of its endpoints. The algorithm iterates until all b(·) values are satisfied. Unfortunately, the dynamic weight update in the LSE algorithm makes the parallel implementation inefficient. If we work with the static edge weights instead of the dynamic effective weights, we obtain a 2-approximation guarantee, while significantly improving the run time performance and scalability. We call this algorithm S-LSE, i.e., LSE with static edge weights and no effective weight update, and this is a new contribution in this paper. The S-LSE algorithm iteratively adds a set of locally sub-dominant edges, determined with respect to the static edge weights, to the cover.

The MCE algorithm, the other new 2-approximation algorithm, reduces the computation of a b-EDGE COVER to that of finding a b′-MATCHING, with b′(v) = deg(v) − b(v), where deg(v) is the degree of a vertex v. There is a complementary relationship between these problems in the context of optimal matchings and edge covers, and we extend it to approximate solutions, discuss the condition under which this relationship holds, and use it to design the MCE algorithm.

We design parallel versions of the LSE, S-LSE and MCE algorithms, and compare the run time performances of these algorithms on Intel Xeon and IBM Power8 multiprocessors. We show that the MCE algorithm is the fastest among these algorithms both on serial and shared-memory multi-threaded processors, outperforming the others by at least an order of magnitude. The MCE algorithm employs the b-SUITOR algorithm for computing a b′-MATCHING; the latter algorithm scales to 16K cores of a distributed-memory machine [13]. We show here that b-EDGE COVERS in a graph with billions of edges can be computed in seconds with a Terabyte-scale shared-memory machine using hundreds of threads.

The rest of this paper is organized as follows. We provide background on the b-EDGE COVER problem and its relation to matchings in Section II. In Section III, we discuss our proposed 2-approximation algorithms S-LSE and MCE. Parallel implementations of these algorithms are described in Section IV. The worst-case approximation guarantee of 2 for the MCE algorithm, and that the MCE and the S-LSE algorithms compute the same edge cover, are proved in Section V. The parallel depth and work of the SUITOR and b-SUITOR algorithms, on which the MCE algorithm depends, are included in Section VI. Our experiments and results are described in Section VII, and we conclude in Section VIII.

II. BACKGROUND

The well-known k-nearest neighbor graph construction to represent noisy and dense data is related to the b-EDGE COVER problem. The formulation as a b-EDGE COVER problem is more general, since instead of using a uniform value of b, we can choose b(v) to depend on each vertex v. Furthermore, as the work in this paper shows, this construction creates redundant edges which may be removed to obtain a sparser graph while satisfying the b(v) constraints.
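The complementary relationship underlying the MCE algorithm can be illustrated directly. The sketch below is ours: it substitutes a simple heaviest-edge-first greedy b′-matching where the paper uses the b-SUITOR algorithm, and it shows only that the complement of a b′-matching is a feasible b-edge cover, not the paper's 2-approximation analysis.

```python
def mce_b_edge_cover(vertices, edges, w, b):
    """Sketch of the MCE idea: a b-edge cover as the complement of a
    b'-matching, with b'(v) = deg(v) - b(v).

    A greedy (heaviest-edge-first) 1/2-approximate b'-matching stands
    in here for the b-SUITOR algorithm used in the paper.
    """
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    b_prime = {v: deg[v] - b[v] for v in vertices}
    assert all(c >= 0 for c in b_prime.values()), "requires deg(v) >= b(v)"

    residual = dict(b_prime)
    matching = set()
    for u, v in sorted(edges, key=lambda e: w[e], reverse=True):
        if residual[u] > 0 and residual[v] > 0:
            matching.add((u, v))
            residual[u] -= 1
            residual[v] -= 1

    # Each vertex v loses at most b'(v) = deg(v) - b(v) of its edges to
    # the matching, so at least b(v) incident edges remain in the cover.
    return [e for e in edges if e not in matching]
```

Since the weights of the matching and its complement sum to the total edge weight, maximizing the weight of the b′-matching drives down the weight of the complementary cover; the paper's analysis makes this precise for approximate matchings.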
