Combinatorial Algorithms for Subset Sum Problems

Total pages: 16

File type: PDF, size: 1020 KB

Combinatorial Algorithms for Subset Sum Problems

Dissertation submitted to the Faculty of Mathematics of the Ruhr-Universität Bochum by Ilya Ozerov.
First referee: Prof. Dr. Alexander May. Second referee: Prof. Dr. Gregor Leander. Date of the oral examination: 05.02.2016.

Contents

1 Introduction
2 Consistency Problem
  2.1 High-Level Idea
    2.1.1 NN Problem
    2.1.2 zeroAND Problem
    2.1.3 A Joint Solution
  2.2 Consistency Problem
    2.2.1 Preliminaries
    2.2.2 Problem and Algorithm
    2.2.3 Analysis
  2.3 Weight Match Problem
    2.3.1 General Case
    2.3.2 Random Weight Match Problem
  2.4 Nearest Neighbor Problem
    2.4.1 Analysis
3 Subset Sum Problem
  3.1 Generalized Problem
    3.1.1 Brute Force
    3.1.2 Meet-in-the-Middle
  3.2 Random Subset Sum Problem
    3.2.1 Tools
  3.3 Known Results
    3.3.1 Meet-in-the-Middle Revisited
    3.3.2 Classical Representations
  3.4 Consistent Representations
    3.4.1 Group Weight Match Problem
    3.4.2 Algorithm
4 Binary Subset Sum Problem
  4.1 Known Results
    4.1.1 Meet-in-the-Middle
    4.1.2 Representations I
    4.1.3 Representations II
  4.2 Novel Results
    4.2.1 Consistent Representations I
    4.2.2 Consistent Representations II
  4.3 Results in Special Groups
    4.3.1 Algorithms
5 Knapsack Problem
  5.1 Results
6 Decoding Problem
  6.1 Classical Algorithms
    6.1.1 Prange's Information Set Decoding
    6.1.2 Stern's Decoding
    6.1.3 MMT
    6.1.4 BJMM
  6.2 Application of the Nearest Neighbor Technique
    6.2.1 Basic Approach
    6.2.2 Using Representations
    6.2.3 Nearest Neighbor with MMT
    6.2.4 Nearest Neighbor with BJMM
  6.3 Conclusion
7 Discrete Logarithm Problem
  7.1 Known Generic Algorithm
  7.2 New Generic Algorithm
  7.3 Optimal Splitting

Danksagung (Acknowledgments)

My special thanks go to my doctoral advisor Alex May, who gave me the opportunity to work at his chair and to write these pages. I feel that these good four years have helped me enormously to broaden my horizon and, above all, to improve my presentation skills. Many thanks for the numerous days on which you listened to my ideas, which unfortunately almost always had at least one small catch, but which, through our discussions, always pushed me a little further. Your advice (in case we had not already taken the ideas apart on the spot) to let the research rest for a night and end the day with a small sense of achievement is something I unfortunately only followed much too late. A very good feeling, even if the next morning I then found the error in the calculation after all.

An extraordinary thank you goes to my mother, who decided to emigrate with me to Germany and made it possible for me to go to school and to study in this country. Many, many thanks to my whole family, Edith, Det, Andi, Maxim, and especially to my grandmother, for the constant support and for always being there for me! An enormous thank you to Pawel, with whom I squeezed through my studies from the first semester on with joint power-learning sessions. Without you I would not have managed to keep up this motivation and this pace. Many thanks for all the Pizza Hut and cinema sessions, Portal and DayZ, and the many laps we did around the Kemnader See. A big thanks to Saqib for all the night sessions at the university, all the trips to Burger King and the train rides of sometimes a hundred kilometers to see a film in the original version. Through the time with you, my "not the yellow of the egg" English skills feel like they have at least doubled, for which I want to thank you especially! Many thanks to the party people from the chess club, especially Ken, Kev and Mio. The hours-long gaming sessions on Fridays were simply enormous fun and were super important for switching off every now and then. I also would not want to miss the matches of the third team: cheering along at any time and pure thrill. Here's to many more years! Many thanks to Marina, Philipp, Alex and Enrico for the cool time in Israel and on Mykonos, which gave me an ideal start into the academic world. Many thanks to Elena, Robert and Daniel for the great time in Israel (part II) with the rental-car trip to Haifa. Many thanks also for the cool party-house time in Amherst with Robert, Pawel and Marc, and the subsequent east-coast round trip with Felix, Robert and Jiaxin. Many thanks, Robert, for the many YouTube videos, the time in Sofia and a really fun year in the office. Many thanks to Jiaxin for the many laps around the lake, to Gregor for the second report, and many thanks to Carla and Sven for the many tips and words of encouragement. A special, extraordinary and enormous thank you to Gotti for all the game nights and the brilliant answers to my numerous questions, over which I would certainly have sat weeks longer without you. And of course many, many, many thanks to Marion for the constant help with the applications and forms, and for the kind words and hints that guided me through this time in a relaxed way. Many, many thanks to all the people of the best crypto group in the world for this great time! My final thanks go to LaTeX, which managed to typeset a power of two of pages. Looks secure, doesn't it?

Zusammenfassung (Summary)

In this thesis, a generalized version of important combinatorial problems, so-called Subset Sum Problems, is analyzed. New algorithms for this class of problems are developed by studying a so-called Consistency Problem, which leads to better algorithms for the zeroAND Problem and the Nearest Neighbor Problem. This implies the best known algorithms for the Knapsack Problem with a complexity of 2^(0.287n) and for the Decoding Problem with running time 2^(0.097n). It is shown that the techniques used can also be applied to a special case of the Discrete Logarithm Problem.

Summary

In this thesis, we present a generalized framework for the study of combinatorial problems, so-called Subset Sum Problems. In this framework, we improve the best known technique for solving this class of problems by identifying a Consistency Problem. We present a more efficient algorithm for this problem, which leads to algorithms for the special cases of the zeroAND Problem and the Nearest Neighbor Problem. This implies the best known algorithm for the Knapsack Problem with time complexity 2^(0.287n) and the Decoding Problem with time complexity 2^(0.097n). We show that the studied combinatorial techniques can also be applied to a special case of the Discrete Logarithm Problem.
Chapter 1. Introduction

The results by Cook [Coo71], Karp [Kar72], and Levin [Lev73] on NP-completeness identify a large class of problems that are equally hard, i.e. once there is an efficient algorithm for only one of these problems, it directly implies an efficient algorithm for all the problems from the class. An algorithm for a problem is called efficient if it runs in time polynomial in the input size for all instances of the problem. The unsolved "P = NP?" question in computer science asks whether there exist efficient algorithms for this class of problems, or whether it can be shown that there are none. NP-hard problems are a larger class of problems that are at least as hard as NP-complete problems. A decision problem simply asks if a problem instance is solvable or not, whereas a computational problem also asks to find a solution. In this thesis, we want to concentrate on two NP-hard computational problems, the Knapsack Problem [Kar72] and the Decoding Problem [BMvT78]. The Knapsack Problem asks to find a subset of n given integers a_1, ..., a_n ∈ Z that sums to a target integer s ∈ Z. The Decoding Problem asks to find a closest codeword of a linear code C to a given vector x.

Notice that even though there might be instances of NP-hard problems that are indeed difficult to solve, this is clearly not true for all instances. An important task for the field of cryptography is therefore to identify a certain subset of hard instances. Ideally, one would like to link the worst-case hardness (i.e. the hardness of the hardest instance) to some average-case hardness, which is the hardness of an instance chosen from some distribution. Indeed, Ajtai [Ajt96] was able to show a reduction between the average case and the worst case of a so-called Unique Shortest Vector Problem (USVP). That is, if one is able to break a certain average-case instance (e.g. an instance of a cryptographic scheme like [AD97]) efficiently, one is able to solve any and, therefore, also the hardest instance of the USVP efficiently. However, Ajtai wasn't able to show that the USVP is NP-hard, which is only provable for the Shortest Vector Problem. This indicates that it doesn't seem to be possible to link the average-case hardness and the worst-case hardness of NP-hard problems. However, even if such a link could be established, it is still unclear how fast the problems can be solved for practical instances. Therefore, one has to rely on identifying instances that are practically hard to solve by analyzing the best algorithms for these instances.
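The brute-force and meet-in-the-middle subsections listed in the contents refer to the classical baselines for this problem. As a point of reference, here is a minimal meet-in-the-middle sketch for the Subset Sum / Knapsack search problem defined above, with time and memory roughly 2^(n/2); all identifiers are illustrative, and this is the textbook baseline rather than any of the improved algorithms developed in the thesis.

    from itertools import combinations

    def meet_in_the_middle(a, s):
        """Return a subset of the integers a that sums to s, or None.

        Classical meet-in-the-middle: enumerate all sums of the left half,
        store them in a dictionary, then look for a matching right-half sum.
        Time and space are O(2^(n/2)) up to polynomial factors.
        """
        n = len(a)
        left, right = a[:n // 2], a[n // 2:]

        def all_subset_sums(items):
            sums = {}
            for r in range(len(items) + 1):
                for combo in combinations(range(len(items)), r):
                    sums.setdefault(sum(items[i] for i in combo), combo)
            return sums

        left_sums = all_subset_sums(left)
        right_sums = all_subset_sums(right)

        for rs, right_combo in right_sums.items():
            if s - rs in left_sums:
                left_combo = left_sums[s - rs]
                return [left[i] for i in left_combo] + [right[i] for i in right_combo]
        return None

    # Example: find a subset of [3, 34, 4, 12, 5, 2] summing to 9, e.g. [4, 5]
    print(meet_in_the_middle([3, 34, 4, 12, 5, 2], 9))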
Recommended publications
  • Optimal Sequential Multi-Way Number Partitioning (Richard E. Korf, Ethan L. Schreiber, and Michael D. Moffitt)
    Optimal Sequential Multi-Way Number Partitioning. Richard E. Korf, Ethan L. Schreiber, and Michael D. Moffitt. Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095; IBM Corp., 11400 Burnet Road, Austin, TX 78758. [email protected], [email protected], moffi[email protected]

    Abstract. Given a multiset of n positive integers, the NP-complete problem of number partitioning is to assign each integer to one of k subsets, such that the largest sum of the integers assigned to any subset is minimized. Last year, three different papers on optimally solving this problem appeared in the literature, two from the first two authors, and one from the third author. We resolve here competing claims of these papers, showing that different algorithms work best for different values of n and k, with orders of magnitude differences in their performance. We combine the best ideas from both approaches into a new algorithm, called sequential number partitioning.

    partitioning is a special case of this problem, where the target value is half the sum of all the integers. We describe five different optimal algorithms for these problems. A sixth algorithm, dynamic programming, is not competitive in either time or space (Korf and Schreiber 2013).

    Inclusion-Exclusion (IE). Perhaps the simplest way to generate all subsets of a given set is to search a binary tree depth-first, where each level corresponds to a different element. Each node includes the element on the left branch, and excludes it on the right branch. The leaves correspond to complete subsets.
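    The include/exclude tree described above is easy to picture as a short recursion. The sketch below is purely illustrative (not the authors' optimized IE implementation) and enumerates all subsets of a list by branching on each element in turn:

    def enumerate_subsets(items, index=0, chosen=None):
        """Depth-first include/exclude search: at each level we either include
        the current element (left branch) or exclude it (right branch).
        Each leaf (index == len(items)) corresponds to one complete subset."""
        if chosen is None:
            chosen = []
        if index == len(items):
            yield list(chosen)
            return
        # Left branch: include items[index]
        chosen.append(items[index])
        yield from enumerate_subsets(items, index + 1, chosen)
        chosen.pop()
        # Right branch: exclude items[index]
        yield from enumerate_subsets(items, index + 1, chosen)

    # All 2^3 subsets of [8, 6, 7]
    print(list(enumerate_subsets([8, 6, 7])))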
  • (R18A0507) Design and Analysis of Algorithms.Pdf
    DESIGN AND ANALYSIS OF ALGORITHMS [R18A0507] LECTURE NOTES. B.TECH III YEAR – I SEM (R18) (2020-2021). DEPARTMENT OF INFORMATION TECHNOLOGY, MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY (Autonomous Institution – UGC, Govt. of India). Recognized under 2(f) and 12(B) of UGC ACT 1956. (Affiliated to JNTUH, Hyderabad, Approved by AICTE, Accredited by NBA & NAAC 'A' Grade, ISO 9001:2015 Certified). Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad – 500100, Telangana State, India.

    III Year B.Tech. IT – I Sem. L: 3, T/P/D: -/-/-, C: 3. (R18A0507) Design and Analysis of Algorithms.

    Objectives:
    - To analyze performance of algorithms.
    - To choose the appropriate data structure and algorithm design method for a specified application.
    - To understand how the choice of data structures and algorithm design methods impacts the performance of programs.
    - To solve problems using algorithm design methods such as the greedy method, divide and conquer, dynamic programming, backtracking and branch and bound.
    - To understand the differences between tractable and intractable problems.
    - To introduce P and NP classes.

    UNIT I: Introduction: Algorithm definition, Algorithm Specification, Performance Analysis (Space complexity, Time complexity), Randomized Algorithms. Divide and conquer: General method, applications (Binary search, Merge sort, Quick sort, Strassen's Matrix Multiplication).

    UNIT II: Disjoint set operations, union and find algorithms, AND/OR graphs, Connected Components and Spanning trees, Bi-connected components. Backtracking: General method, applications (the 8-queen problem, sum of subsets problem, graph coloring,
  • An Efficient Fully Polynomial Approximation Scheme for the Subset-Sum Problem
    Journal of Computer and System Sciences 66 (2003) 349–370. http://www.elsevier.com/locate/jcss

    An efficient fully polynomial approximation scheme for the Subset-Sum Problem. Hans Kellerer and Ulrich Pferschy (Institut für Statistik und Operations Research, Universität Graz, Universitätsstr. 15, A-8010 Graz, Austria), Renata Mansini (Dipartimento di Elettronica per l'Automazione, Università di Brescia, via Branze 38, I-25123 Brescia, Italy), and Maria Grazia Speranza (Dipartimento Metodi Quantitativi, Università di Brescia, Contrada S. Chiara 48/b, I-25122 Brescia, Italy). Received 20 January 2000; revised 24 June 2002.

    Abstract. Given a set of n positive integers and a knapsack of capacity c, the Subset-Sum Problem is to find a subset the sum of which is closest to c without exceeding the value c. In this paper we present a fully polynomial approximation scheme which solves the Subset-Sum Problem with accuracy ε in time O(min{n · 1/ε, n + (1/ε²) log(1/ε)}) and space O(n + 1/ε). This scheme has a better time and space complexity than previously known approximation schemes. Moreover, the scheme always finds the optimal solution if it is smaller than (1 − ε)c. Computational results show that the scheme efficiently solves instances with up to 5000 items with a guaranteed relative error smaller than 1/1000. (c) 2003 Elsevier Science (USA). All rights reserved.

    Keywords: Subset-sum problem; Worst-case performance; Fully polynomial approximation scheme; Knapsack problem
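    For intuition, here is a minimal sketch of the standard trimming-based FPTAS for Subset Sum; this is the textbook scheme, not the improved algorithm of the paper above. It keeps a sorted list of achievable sums and prunes values that are within a relative factor 1 + eps/(2n) of an already kept value:

    def subset_sum_fptas(items, capacity, eps):
        """Return an achievable subset sum that is at least (1 - eps) times the
        largest achievable sum not exceeding `capacity` (for 0 < eps <= 1).
        Standard trim-the-list FPTAS sketch."""
        delta = eps / (2 * len(items))      # trimming parameter
        sums = [0]                          # kept achievable subset sums
        for x in items:
            merged = sorted(set(sums + [s + x for s in sums if s + x <= capacity]))
            # Trim: drop sums within a factor (1 + delta) of the last kept one
            trimmed, last = [], -1.0
            for s in merged:
                if s > last * (1 + delta):
                    trimmed.append(s)
                    last = s
            sums = trimmed
        return max(sums)

    # Example: best sum <= 100 from these items, within 1% of optimal
    print(subset_sum_fptas([31, 41, 59, 26, 53], 100, 0.01))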
  • Lecture Notes for Subset Sum Introduction Problem Definition
    CSE 6311: Advanced Computational Models and Algorithms. Jan 28, 2010. Lecture Notes for Subset Sum. Professor: Dr. Gautam Das. Lecture by: Saravanan.

    Introduction. Subset sum is one of the very few arithmetic/numeric problems that we will discuss in this class. It has a lot of interesting properties and is closely related to other NP-complete problems like Knapsack. Even though Knapsack was one of the 21 problems proved to be NP-complete by Richard Karp in his seminal paper, the formal definition he used was closer to subset sum rather than Knapsack. Informally, given a set of numbers S and a target number t, the aim is to find a subset S′ of S such that the elements in it add up to t. Even though the problem appears deceptively simple, solving it is exceedingly hard if we are not given any additional information. We will later show that it is an NP-complete problem and probably an efficient algorithm may not exist at all.

    Problem Definition. The decision version of the problem is: given a set S and a target t, does there exist a subset S′ ⊆ S such that t = Σ_{s ∈ S′} s?

    Exponential time algorithm approaches. One thing to note is that this problem becomes polynomial if the size of S′ is given. For example, a typical interview question might look like: given an array, find two elements that add up to t. This problem is perfectly polynomial and we can come up with a straightforward O(n²) algorithm using nested for loops to solve it.
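    As a concrete illustration of the fixed-size case just mentioned, the straightforward O(n²) nested-loop search for two elements summing to t looks as follows (a hash-set variant with expected O(n) time is included for contrast; both are illustrative sketches):

    def two_sum_quadratic(arr, t):
        """O(n^2) nested loops: check every pair for a sum equal to t."""
        n = len(arr)
        for i in range(n):
            for j in range(i + 1, n):
                if arr[i] + arr[j] == t:
                    return arr[i], arr[j]
        return None

    def two_sum_hashing(arr, t):
        """Expected O(n): remember seen values and look up the complement."""
        seen = set()
        for x in arr:
            if t - x in seen:
                return t - x, x
            seen.add(x)
        return None

    print(two_sum_quadratic([8, 1, 7, 3], 10))  # (7, 3)
    print(two_sum_hashing([8, 1, 7, 3], 10))    # (7, 3)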
  • Space-Efficient Approximations for Subset Sum ⋆
    Space-Efficient Approximations for Subset Sum. Anna Gál (University of Texas at Austin, [email protected]), Jing-Tang Jang (Google, Mountain View, [email protected]), Nutan Limaye (IIT Bombay, [email protected]), Meena Mahajan (Institute of Mathematical Sciences, Chennai, [email protected]), and Karteek Sreenivasaiah (Max-Planck Institute for Informatics, Saarbrücken, [email protected]).

    Abstract. SubsetSum is a well known NP-complete problem: given t ∈ Z⁺ and a set S of m positive integers, output YES if and only if there is a subset S′ ⊆ S such that the sum of all numbers in S′ equals t. The problem and its search and optimization versions are known to be solvable in pseudo-polynomial time in general. We develop a 1-pass deterministic streaming algorithm that uses space O((log t)/ε) and decides if some subset of the input stream adds up to a value in the range {(1 ± ε)t}. Using this algorithm, we design space-efficient Fully Polynomial-Time Approximation Schemes (FPTAS) solving the search and optimization versions of SubsetSum. Our algorithms run in O((1/ε) · m²) time and O(1/ε) space on unit cost RAMs, where 1 + ε is the approximation factor. This implies constant space, quadratic time FPTAS on unit cost RAMs when ε is a constant. Previous FPTAS used space linear in m. In addition, we show that on certain inputs, when a solution is located within a short prefix of the input sequence, our algorithms may run in sublinear time. We apply our techniques to the problem of finding balanced separators, and we extend our results to some other variants of the more general knapsack problem.
  • Solving K-SUM Using Few Linear Queries
    Solving k-SUM Using Few Linear Queries. Jean Cardinal (Université libre de Bruxelles (ULB), Brussels, Belgium, [email protected]), John Iacono (New York University, New York, USA, [email protected]), and Aurélien Ooms (Université libre de Bruxelles (ULB), Brussels, Belgium, [email protected]).

    Abstract. The k-SUM problem is, given n input real numbers, to determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c < ⌈k/2⌉. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O(n³ log² n) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in Õ(n^(⌈k/2⌉+8)) time, and performs O(n³ log² n) linear queries on the input. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to the +8) to the best known algorithm but for the first time also with the number of queries on the input a polynomial that is independent of k. The O(n³ log² n) bound on the number of linear queries is also a tighter bound than any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem which is a cornerstone of complexity-within-P.
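    As a reminder of what the problem asks (and not the decision-tree machinery of the paper), the standard sort-and-two-pointer routine below solves the k = 3 case (3SUM) in O(n²) time, which matches the n^⌈k/2⌉ pattern for k = 3:

    def three_sum_zero(nums):
        """Classic O(n^2) 3SUM: sort, fix one element, two-pointer scan the rest.
        Returns a triple summing to zero, or None."""
        a = sorted(nums)
        n = len(a)
        for i in range(n - 2):
            lo, hi = i + 1, n - 1
            while lo < hi:
                s = a[i] + a[lo] + a[hi]
                if s == 0:
                    return a[i], a[lo], a[hi]
                if s < 0:
                    lo += 1
                else:
                    hi -= 1
        return None

    print(three_sum_zero([-4.0, 1.5, 2.5, 7.0, -1.0]))  # (-4.0, 1.5, 2.5)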
  • A Fast Algorithm for Performance-Driven Module Implementation Selection*
    VLSI DESIGN, 1999, Vol. 10, No. 2, pp. 237–247. (C) 1999 OPA (Overseas Publishers Association) N.V. Published by license under the Gordon and Breach Science Publishers imprint. Reprints available directly from the publisher. Photocopying permitted by license only. Printed in Malaysia.

    A Fast Algorithm for Performance-Driven Module Implementation Selection. Edward Y. C. Cheng and Sartaj Sahni, Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL 32611-6120. (Received 24 March 1998; in final form 8 February 1999.)

    We develop an O(p log n) time algorithm to obtain optimal solutions to the p-pin, n-net single-channel performance-driven implementation selection problem in which each module has at most two possible implementations (2-PDMIS). Although Her, Wang and Wong [1] have also developed an O(p log n) algorithm for this problem, experiments indicate that our algorithm is twice as fast on small circuits and up to eleven times as fast on larger circuits. We also develop an O(p · n^(c−1)) time algorithm for the c-channel (c > 1) version of the 2-PDMIS problem.

    Keywords and phrases: channel density, net span constraint, performance-driven module implementation selection, complexity.

    1. INTRODUCTION. In the channel routing problem, we have a routing channel with modules on the top and bottom of the channel; the modules have pins and subsets of pins define nets.

    routing problem. Channel routing with rearrangeable pins was studied by Kobayashi and Drozd [3]. They proposed a three-step algorithm: (1) permute pins so as to maximize the number of aligned pin pairs (a pair of pins on different sides of the chan-
  • Lattice-Based Algorithms for Number Partitioning in the Hard Phase
    Lattice-based Algorithms for Number Partitioning in the Hard Phase. Bala Krishnamoorthy, William Webb, and Nathan Moyer, Department of Mathematics, Washington State University, Pullman WA. {bkrishna, webb, nmoyer}@math.wsu.edu

    Abstract. The number partitioning problem (NPP) is to divide n numbers a_1, ..., a_n into two disjoint subsets such that the difference between the two subset sums (the discrepancy, ∆) is minimized. In the balanced version of NPP (BalNPP), the subsets must have the same cardinality. With the n numbers a_j chosen uniformly from [1, R], R > 2^n gives the hard phase, when there are no equal partitions (i.e., ∆ = 0) with high probability (whp). In this phase, the minimum partition is also unique whp. Most current methods struggle in the hard phase, as they often perform exhaustive enumeration of all partitions to find the optimum. We propose reductions of NPP and BalNPP in the hard phase to the closest vector problem (CVP). We can solve the original problems by making polynomial numbers of calls to a CVP oracle. In practice, we implement a heuristic which applies basis reduction (BR) to several CVP instances (fewer than 2n in most cases). This method finds near-optimal solutions without proof of optimality to NPP problems with reasonably large dimensions, up to n = 75. Second, we propose a truncated NPP algorithm, which finds approximate minimum discrepancies for instances on which the BR approach is not effective. In place of the original instance, we solve a modified instance with ā_j = ⌊a_j/T⌉ for some T ≤ R. We show that the expected optimal discrepancy of the original problem given by the truncated solution, E(∆*_T), is not much different from the expected optimal discrepancy: E(∆*_T) ≤ E(∆*) + nT/2.
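    The additive term nT/2 has a short explanation; the following is a sketch of the underlying observation, not the paper's proof. Rounding each a_j to T·ā_j changes every number by at most T/2, so for any fixed partition (signs s_j ∈ {±1}) the original discrepancy and T times the truncated discrepancy differ by at most nT/2:

    \[
      \Bigl|\, \bigl|\textstyle\sum_j s_j a_j\bigr| \;-\; T\,\bigl|\textstyle\sum_j s_j \bar a_j\bigr| \,\Bigr|
      \;\le\; \sum_{j=1}^{n} \bigl|a_j - T\,\bar a_j\bigr|
      \;\le\; \frac{nT}{2},
      \qquad \bar a_j = \lfloor a_j/T \rceil .
    \]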
  • COMPUTATIONALLY RELATED PROBLEMS
    SIAM J. COMPUT., Vol. 3, No. 4, December 1974.

    COMPUTATIONALLY RELATED PROBLEMS. Sartaj Sahni.

    Abstract. We look at several problems from areas such as network flows, game theory, artificial intelligence, graph theory, integer programming and nonlinear programming and show that they are related in that any one of these problems is solvable in polynomial time iff all the others are, too. At present, no polynomial time algorithm for these problems is known. These problems extend the equivalence class of problems known as P-Complete. The problem of deciding whether the class of languages accepted by polynomial time nondeterministic Turing machines is the same as that accepted by polynomial time deterministic Turing machines is related to P-Complete problems in that these two classes of languages are the same iff each P-Complete problem has a polynomial deterministic solution. In view of this, it appears very likely that this equivalence class defines a class of problems that cannot be solved in deterministic polynomial time.

    Key words: complexity, polynomial reducibility, deterministic and nondeterministic algorithms, network flows, game theory, optimization, AND/OR graphs.

    1. Introduction. Cook [3] showed that determining whether the class of languages accepted by nondeterministic Turing machines operating in polynomial time was the same as that accepted by deterministic polynomial time bounded Turing machines was as hard as deciding if there was a deterministic polynomial algorithm for the satisfiability problem of propositional calculus (actually, Cook showed that there was a polynomial algorithm for satisfiability iff the deterministic and nondeterministic polynomial time languages were the same).
  • Computing Biconnected Components on a Hypercube*
    COMPUTING BICONNECTED COMPONENTS ON A HYPERCUBE*. Jinwoon Woo and Sartaj Sahni, University of Minnesota.

    Abstract. We describe two hypercube algorithms to find the biconnected components (i.e., blocks) of a connected undirected graph. One is a modified version of the Tarjan-Vishkin algorithm. The two hypercube algorithms were experimentally evaluated on an NCUBE/7 MIMD hypercube computer. The two algorithms have comparable performance and efficiencies as high as 0.7 were observed.

    Keywords and phrases: hypercube computing, MIMD computer, parallel programming, biconnected components.

    * This research was supported in part by the National Science Foundation under grants DCR84-20935 and MIP 86-17374.

    1. INTRODUCTION. In this paper we develop two biconnected component (i.e., block) algorithms suitable for medium grained MIMD hypercubes. The first algorithm is an adaptation of the algorithm of Tarjan and Vishkin [TARJ85]. Tarjan and Vishkin provide parallel CRCW and CREW PRAM implementations of their algorithm. The CRCW PRAM implementation of Tarjan and Vishkin runs in O(log n) time and uses O(n + m) processors. Here n and m are, respectively, the number of vertices and edges in the input connected graph. The CREW PRAM implementation runs in O(log² n) time using O(n²/log² n) processors. A PRAM algorithm that uses p processors and t time can be simulated by a p-processor hypercube in O(t log² p) time using the random access read and write algorithms of Nassimi and Sahni [NASS81]. The CREW PRAM algorithm of [TARJ85] therefore results in an O(log³ n) time, O(n + m) processor hypercube algorithm.
  • The SUBSET-SUM Problem
    CMPSCI 611: The SUBSET-SUM Problem. Lecture 18.

    We begin today with the problem we didn't get to at the end of last lecture, the SUBSET-SUM problem, which we also saw back in Lecture 8. The input to SUBSET-SUM is a set of numbers {a_1, ..., a_n} and a target number t, and we ask whether there is a subset of the numbers that add exactly to t. Using dynamic programming, we showed that we could decide this language in time that is polynomial in n and s, the sum of all the a_i. Now we allow the numbers to get larger, so that they now might be n bits long. The problem is still in NP, because we can guess a subset by guessing a bitvector, add the numbers in the set, and verify that we get t. But it's no longer clear that we are in P, and in fact we will now see that the general problem is NP-complete.

    We reduce 3-SAT to SUBSET-SUM (with large numbers). We first assume that every clause in our input formula has exactly three literals; we can just repeat literals in the same clause to make this true. Our numbers will be represented in decimal notation, with a column for each of the v variables and a column for each clause in the formula. We'll create an item a_i for each of the 2v literals. This item will have a 1 in the column for its variable, a 1 in the column of each clause where the literal appears, and zeroes everywhere else.
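    The dynamic program alluded to in the first paragraph is short enough to show in full; the following is a standard textbook sketch with running time O(n·s), where s is the sum of all the a_i:

    def subset_sum_dp(numbers, t):
        """Pseudo-polynomial dynamic program: reachable[v] is True iff some
        subset of the numbers processed so far sums to v. O(n * s) time,
        where s is the sum of all numbers (so only efficient for small numbers)."""
        s = sum(numbers)
        if t < 0 or t > s:
            return False
        reachable = [False] * (s + 1)
        reachable[0] = True
        for x in numbers:
            # Iterate downwards so each number is used at most once
            for v in range(s, x - 1, -1):
                if reachable[v - x]:
                    reachable[v] = True
        return reachable[t]

    print(subset_sum_dp([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
    print(subset_sum_dp([3, 34, 4, 12, 5, 2], 30))  # False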
  • A Hybrid Recursive Multi-Way Number Partitioning Algorithm (Richard E. Korf)
    Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence.

    A Hybrid Recursive Multi-Way Number Partitioning Algorithm. Richard E. Korf, Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095. [email protected]

    Abstract. The number partitioning problem is to divide a given set of n positive integers into k subsets, so that the sums of the numbers in each subset are as nearly equal as possible. While effective algorithms for two-way partitioning exist, multi-way partitioning is much more challenging. We introduce an improved algorithm for optimal multi-way partitioning, by combining several existing algorithms with some new extensions. We test our algorithm for partitioning 31-bit integers from three to ten ways, and demonstrate orders of magnitude speedup over the previous state of the art.

    to maximize the smallest subset sum, which is the objective function for the voting manipulation application. Minimizing the largest subset sum also allows our number-partitioning algorithms to be directly applied to bin packing. In bin packing, each of a set of numbers is assigned to a bin of fixed capacity, so that the sums of the numbers in each bin do not exceed the bin capacity, while minimizing the number of bins used. In practice, heuristic approximations for bin packing, such as best-fit decreasing, use only a few more bins than a simple lower bound, such as the sum of all numbers divided by the bin capacity. Thus, an effective bin-packing strategy is to allocate a fixed number of bins, and then to iteratively reduce the number of bins until a solution is no longer possible.
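    As a point of reference for the bin-packing discussion above, here is a minimal best-fit decreasing heuristic (an illustrative sketch, not the paper's partitioning algorithm): sort the numbers in decreasing order and place each one into the feasible bin that is already fullest, opening a new bin only when necessary.

    def best_fit_decreasing(numbers, capacity):
        """Best-fit decreasing heuristic for bin packing.
        Returns a list of bins (each a list of numbers) whose sums are <= capacity."""
        bins = []          # contents of each bin
        loads = []         # current sum of each bin
        for x in sorted(numbers, reverse=True):
            # Choose the feasible bin with the least remaining space (best fit)
            best = None
            for i, load in enumerate(loads):
                if load + x <= capacity and (best is None or load > loads[best]):
                    best = i
            if best is None:
                bins.append([x])
                loads.append(x)
            else:
                bins[best].append(x)
                loads[best] += x
        return bins

    # Example: pack into bins of capacity 10
    print(best_fit_decreasing([7, 5, 4, 3, 2, 2], 10))  # [[7, 3], [5, 4], [2, 2]]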