An Efficient Fully Polynomial Approximation Scheme for the Subset-Sum Problem

Total Pages: 16

File Type: PDF, Size: 1020 KB

An efficient fully polynomial approximation scheme for the Subset-Sum Problem

Hans Kellerer (a), Renata Mansini (b), Ulrich Pferschy (a), and Maria Grazia Speranza (c)

(a) Institut für Statistik und Operations Research, Universität Graz, Universitätsstr. 15, A-8010 Graz, Austria
(b) Dipartimento di Elettronica per l'Automazione, Università di Brescia, via Branze 38, I-25123 Brescia, Italy
(c) Dipartimento Metodi Quantitativi, Università di Brescia, Contrada S. Chiara 48/b, I-25122 Brescia, Italy

Journal of Computer and System Sciences 66 (2003) 349–370, http://www.elsevier.com/locate/jcss
Received 20 January 2000; revised 24 June 2002
Corresponding author: H. Kellerer. E-mail addresses: [email protected] (H. Kellerer), [email protected] (R. Mansini), pferschy@uni-graz.at (U. Pferschy), [email protected] (M.G. Speranza).
0022-0000/03/$ - see front matter © 2003 Elsevier Science (USA). All rights reserved. doi:10.1016/S0022-0000(03)00006-0

Abstract

Given a set of n positive integers and a knapsack of capacity c, the Subset-Sum Problem is to find a subset the sum of which is closest to c without exceeding the value c. In this paper we present a fully polynomial approximation scheme which solves the Subset-Sum Problem with accuracy $\varepsilon$ in time $O(\min\{n \cdot 1/\varepsilon,\; n + 1/\varepsilon^2 \log(1/\varepsilon)\})$ and space $O(n + 1/\varepsilon)$. This scheme has a better time and space complexity than previously known approximation schemes. Moreover, the scheme always finds the optimal solution if it is smaller than $(1-\varepsilon)c$. Computational results show that the scheme efficiently solves instances with up to 5000 items with a guaranteed relative error smaller than 1/1000. © 2003 Elsevier Science (USA). All rights reserved.

Keywords: Subset-sum problem; Worst-case performance; Fully polynomial approximation scheme; Knapsack problem

1. Introduction

Given a set of n items $E_n = \{1, \ldots, n\}$, each having a positive integer weight $w_j$ ($j = 1, \ldots, n$), and a knapsack of capacity c, the Subset-Sum Problem (SSP) is to select a subset E of $E_n$ such that the corresponding total weight $w(E)$ is closest to c without exceeding c. Formally, the SSP is defined as follows:

maximize $\sum_{j=1}^{n} w_j x_j$
subject to $\sum_{j=1}^{n} w_j x_j \le c$, $x_j \in \{0,1\}$ $(j = 1, \ldots, n)$,

where $x_j = 1$ if item j is selected and $x_j = 0$ otherwise. We assume, without loss of generality, that $\sum_{j=1}^{n} w_j \ge c$ and $w_j \le c$ for $j = 1, \ldots, n$.

The SSP is a special case of the 0-1 Knapsack Problem arising when the profit and the weight associated with each item j are identical. A large number of theoretical and practical papers has appeared on this problem. An extensive overview of the literature is contained in the excellent book by Martello and Toth [18]. The SSP is well known to be NP-hard [4]. Therefore, all exact algorithms for the SSP are pseudopolynomial. The classical dynamic programming approach has running time $O(nc)$ and requires $O(nc)$ memory. An optimal algorithm with improved complexity is due to Pisinger [19].
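To make this concrete, here is a minimal Python sketch of the classical $O(nc)$ dynamic programming approach mentioned above (a generic textbook version; the function name and the way one solution set is reconstructed are our own illustrative choices, not taken from the paper):

```python
def subset_sum_dp(weights, c):
    """Classical Bellman-style dynamic program: O(n*c) time.

    Returns the largest reachable weight not exceeding c and one subset
    (as a list of item indices) attaining it.
    """
    reachable = [False] * (c + 1)
    reachable[0] = True
    parent = [None] * (c + 1)   # parent[i]: item added last to reach value i

    for j, w in enumerate(weights):
        # iterate downwards so that each item is used at most once
        for i in range(c, w - 1, -1):
            if not reachable[i] and reachable[i - w]:
                reachable[i] = True
                parent[i] = j

    best = max(i for i in range(c + 1) if reachable[i])

    # reconstruct one solution set by walking back through the parents
    solution, i = [], best
    while i > 0:
        j = parent[i]
        solution.append(j)
        i -= weights[j]
    return best, solution


if __name__ == "__main__":
    # best value is 50 for this instance (21 + 8 + 7 + 7 + 4 + 3)
    print(subset_sum_dp([41, 34, 21, 20, 8, 7, 7, 4, 3, 3], 50))
```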
As for all NP-hard problems, it is interesting to look for suboptimal solutions which are within a predefined range of the optimal value, provided that the time and space requirements are reasonably small, i.e. bounded by a polynomial. The most common method to judge the quality of an approximation algorithm is its worst-case performance. Define by $X^*$ the optimal set of items and by $z^* = \sum_{j \in X^*} w_j$ the optimal solution value of the SSP. Analogously, let $X^H$ be the set of items selected by a heuristic H and $z^H$ the corresponding solution value. A heuristic H for the SSP is a $(1-\varepsilon)$-approximation algorithm ($0 < \varepsilon < 1$) if for any instance

$z^H \ge (1-\varepsilon) z^*$   (1)

holds. The parameter $\varepsilon$ is called the worst-case relative error. A fully polynomial approximation scheme is a heuristic H which, given an instance I and any relative error $\varepsilon$, returns a solution value which obeys (1), and whose running time is polynomial both in the length of the encoded input and in $1/\varepsilon$.

The first fully polynomial approximation scheme for the Subset-Sum Problem was suggested by Ibarra and Kim [8]. They partition the items into small and large items. The weights of the large items are scaled, and then the problem with scaled weights and capacity is solved optimally through dynamic programming. The small items are added afterwards using a greedy-type algorithm. Their approach has time complexity $O(n \cdot 1/\varepsilon^2)$ and space complexity $O(n + 1/\varepsilon^3)$. Lawler [14] improved the scheme of Ibarra and Kim by a direct transfer of a scheme for the knapsack problem which uses a more efficient method of scaling. His algorithm requires only $O(n + 1/\varepsilon^4)$ time and $O(n + 1/\varepsilon^3)$ memory. Note that the special algorithm proposed in his paper for subset sum does not work, since he makes the erroneous proposal to round up the item values.

As an improvement, Lawler claims in his paper that a combination of his approach (which is not correct) with a result by Karp [10] would give a running time of $O(n + 1/\varepsilon^2 \log(1/\varepsilon))$. Karp presents in [10] an algorithm for subset sum with running time $n \frac{1+\varepsilon}{\varepsilon} \log_{1+\varepsilon} 2$, which is $O(n \cdot 1/\varepsilon^2)$. Lawler states that replacing n by the number of large items $O((1/\varepsilon) \log(1/\varepsilon))$ would give a running time of $O(n + 1/\varepsilon^2 \log(1/\varepsilon))$. It can be easily checked that a factor of $1/\varepsilon$ is missing in the second term of this expression. Possibly, this mistake originates from a misprint in Karp's paper, which gives a running time of $n \frac{1+\varepsilon}{2} \log_{1+\varepsilon} 2$ instead of the correct $n \frac{1+\varepsilon}{\varepsilon} \log_{1+\varepsilon} 2$.

The approach by Gens and Levner [5,6] is based on a different idea. They use a dynamic programming procedure where at each iteration solution values are eliminated which are different from each other by at least a threshold value depending on $\varepsilon$. The corresponding solution set is then determined by standard backtracking. Their algorithm solves the Subset-Sum Problem in $O(n \cdot 1/\varepsilon)$ time and space. In 1994 Gens and Levner [7] presented an improved fully polynomial approximation scheme based on the same idea. The algorithm finds an approximate solution with relative error less than $\varepsilon$ in time $O(\min\{n/\varepsilon,\; n + 1/\varepsilon^3\})$ and space $O(\min\{n/\varepsilon,\; n + 1/\varepsilon^2\})$.
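For illustration only, the following sketch shows the general value-elimination (trimming) idea in the style commonly used for subset-sum approximation schemes. It is neither the Gens-Levner scheme nor the algorithm of this paper; the threshold choice eps/(2n) is a standard textbook setting, assumed here for the sake of the example.

```python
def approx_subset_sum(weights, c, eps):
    """Trimmed-list dynamic program illustrating the idea of discarding
    reachable solution values that lie too close together.

    Returns a value z with z <= c and z >= (1 - eps) * OPT.
    """
    if not weights:
        return 0
    n = len(weights)
    delta = eps / (2 * n)           # per-iteration trimming threshold
    sums = [0]                      # sorted list of kept reachable values

    for w in weights:
        # merge the old list with the list shifted by w, drop values > c
        merged = sorted(set(sums + [s + w for s in sums if s + w <= c]))
        # trim: keep a value only if it is not within a (1 + delta) factor
        # of the last value kept
        trimmed = [merged[0]]
        for y in merged[1:]:
            if y > trimmed[-1] * (1 + delta):
                trimmed.append(y)
        sums = trimmed

    return max(sums)
```

With this threshold the kept list has polynomial size in n and $1/\varepsilon$, which is what turns the exact pseudopolynomial recursion into an approximation scheme.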
Our algorithm requires $O(\min\{n \cdot 1/\varepsilon,\; n + 1/\varepsilon^2 \log(1/\varepsilon)\})$ time and $O(n + 1/\varepsilon)$ space. A short description of the algorithm has appeared as an extended abstract in [13].

The paper is organized as follows: In Section 2 we first present the general structure of the algorithm in an informal way; afterwards, our fully polynomial approximation scheme is described extensively in a technical way. Its correctness, its asymptotic running time and its space requirements are analyzed in Section 3. Section 4 contains computational results and, finally, concluding remarks are given in Section 5.

2. The fully polynomial approximation scheme

2.1. Informal description of the algorithm

As our approach is rather involved, we try to give an intuition of the approximation scheme in an informal way. The detailed algorithm is presented in Section 2.2. We will explain the algorithm step by step, starting from Bellman's procedure for calculating the optimal solution, then making several modifications which yield better time and space requirements, and finally reaching the FPTAS with the claimed time and space bounds.

The well-known original dynamic programming approach by Bellman [1] solves the Subset-Sum Problem optimally in the following way: The set R of reachable values consists of integers i less than or equal to the capacity c for which a subset of items exists with total weight equal to i. Starting from the empty set, R is constructed iteratively in n iterations by adding in iteration j the weight $w_j$ to all elements of R and keeping only partial sums not exceeding the capacity. For each value $i \in R$ a corresponding solution set with total weight equal to i is stored. This gives a pseudopolynomial algorithm with time $O(nc)$ and space $O(nc)$.

In order to obtain an FPTAS, the items are first separated into small items (having weight $\le \varepsilon c$) and large items. It can be seen easily that any $(1-\varepsilon)$-approximation algorithm for the large items remains a $(1-\varepsilon)$-approximation algorithm for the whole item set if we assign the small items at the end of the algorithm in a greedy way. (This is done in Step 4 of our algorithm.) Therefore, we will deal only with large items in the further considerations. The interval $(\varepsilon c, c]$ containing the large items is again partitioned into $O(1/\varepsilon)$ subintervals of equal length $\varepsilon c$ (see Step 1). Then, from each subinterval $I_j := (j\varepsilon c, (j+1)\varepsilon c]$ the (at most) $\lceil \frac{1}{\varepsilon j} \rceil - 1$ smallest and $\lceil \frac{1}{\varepsilon j} \rceil - 1$ biggest items are selected. All these large items are collected in the so-called set of relevant items L, and the other items are discarded (see Step 2).
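A small sketch of this preprocessing (Steps 1, 2 and 4 as just described): the helper names are ours, floating-point interval indexing is simplified, and the dynamic programming core that the paper runs on the relevant items L is omitted.

```python
import math

def split_items(weights, c, eps):
    """Separate items into small (weight <= eps*c) and large ones."""
    small = [w for w in weights if w <= eps * c]
    large = [w for w in weights if w > eps * c]
    return small, large

def relevant_items(large, c, eps):
    """Steps 1-2: partition (eps*c, c] into subintervals of length eps*c and
    keep from each subinterval I_j only its ceil(1/(eps*j)) - 1 smallest and
    ceil(1/(eps*j)) - 1 biggest items."""
    buckets = {}
    for w in large:
        j = math.ceil(w / (eps * c)) - 1      # w lies in (j*eps*c, (j+1)*eps*c]
        buckets.setdefault(j, []).append(w)
    L = []
    for j, items in buckets.items():
        k = math.ceil(1 / (eps * j)) - 1      # no feasible solution uses more items of I_j
        items.sort()
        L.extend(items if len(items) <= 2 * k else items[:k] + items[-k:])
    return L

def add_small_greedily(value, small, c):
    """Step 4: fill the remaining capacity greedily with the small items."""
    for w in small:
        if value + w <= c:
            value += w
    return value
```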
Recommended publications
  • Optimal Sequential Multi-Way Number Partitioning Richard E
Optimal Sequential Multi-Way Number Partitioning. Richard E. Korf, Ethan L. Schreiber, and Michael D. Moffitt. Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095; IBM Corp., 11400 Burnet Road, Austin, TX 78758. [email protected], [email protected], moffi[email protected]

Abstract: Given a multiset of n positive integers, the NP-complete problem of number partitioning is to assign each integer to one of k subsets, such that the largest sum of the integers assigned to any subset is minimized. Last year, three different papers on optimally solving this problem appeared in the literature, two from the first two authors, and one from the third author. We resolve here competing claims of these papers, showing that different algorithms work best for different values of n and k, with orders of magnitude differences in their performance. We combine the best ideas from both approaches into a new algorithm, called sequential number partitioning.

[...] partitioning is a special case of this problem, where the target value is half the sum of all the integers. We describe five different optimal algorithms for these problems. A sixth algorithm, dynamic programming, is not competitive in either time or space (Korf and Schreiber 2013).

Inclusion-Exclusion (IE): Perhaps the simplest way to generate all subsets of a given set is to search a binary tree depth-first, where each level corresponds to a different element. Each node includes the element on the left branch, and excludes it on the right branch. The leaves correspond to complete subsets.
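A minimal sketch of the depth-first inclusion-exclusion tree described above (generic code, not the authors' implementation): each level decides whether the corresponding element is in or out, and the leaves enumerate all subsets.

```python
def enumerate_subsets(items):
    """Depth-first inclusion-exclusion: at each level include the current
    element (left branch) or exclude it (right branch)."""
    subsets = []

    def dfs(i, chosen):
        if i == len(items):
            subsets.append(list(chosen))   # a leaf: one complete subset
            return
        chosen.append(items[i])            # left branch: include items[i]
        dfs(i + 1, chosen)
        chosen.pop()                       # right branch: exclude items[i]
        dfs(i + 1, chosen)

    dfs(0, [])
    return subsets

# enumerate_subsets([8, 5, 3]) yields all 8 subsets of {8, 5, 3}
```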
  • Lecture Notes for Subset Sum Introduction Problem Definition
CSE 6311: Advanced Computational Models and Algorithms, Jan 28, 2010. Lecture Notes for Subset Sum. Professor: Dr. Gautam Das. Lecture by: Saravanan.

Introduction: Subset sum is one of the very few arithmetic/numeric problems that we will discuss in this class. It has a lot of interesting properties and is closely related to other NP-complete problems like Knapsack. Even though Knapsack was one of the 21 problems proved to be NP-complete by Richard Karp in his seminal paper, the formal definition he used was closer to subset sum than to Knapsack. Informally, given a set of numbers S and a target number t, the aim is to find a subset S' of S such that the elements in it add up to t. Even though the problem appears deceptively simple, solving it is exceedingly hard if we are not given any additional information. We will later show that it is an NP-complete problem and that an efficient algorithm probably does not exist at all.

Problem Definition: The decision version of the problem is: Given a set S and a target t, does there exist a subset $S' \subseteq S$ such that $t = \sum_{s \in S'} s$?

Exponential time algorithm approaches: One thing to note is that this problem becomes polynomial if the size of S' is given. For example, a typical interview question might look like: given an array, find two elements that add up to t. This problem is perfectly polynomial and we can come up with a straightforward $O(n^2)$ algorithm using nested for loops to solve it.
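The nested-loop idea mentioned for the two-element case could look like the following sketch (illustrative code, not part of the lecture notes):

```python
def two_sum(arr, t):
    """Return indices (i, j) with arr[i] + arr[j] == t, or None.
    Brute force over all pairs: O(n^2) time, O(1) extra space."""
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] + arr[j] == t:
                return i, j
    return None

# two_sum([3, 9, 4, 7], 11) -> (2, 3) since 4 + 7 = 11
```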
  • Space-Efficient Approximations for Subset Sum ⋆
Space-Efficient Approximations for Subset Sum. Anna Gál (University of Texas at Austin, [email protected]), Jing-Tang Jang (Google, Mountain View, [email protected]), Nutan Limaye (IIT Bombay, [email protected]), Meena Mahajan (Institute of Mathematical Sciences, Chennai, [email protected]), and Karteek Sreenivasaiah (Max-Planck Institute for Informatics, Saarbrücken, [email protected]).

Abstract: SubsetSum is a well-known NP-complete problem: given $t \in \mathbb{Z}^+$ and a set S of m positive integers, output YES if and only if there is a subset $S' \subseteq S$ such that the sum of all numbers in $S'$ equals t. The problem and its search and optimization versions are known to be solvable in pseudo-polynomial time in general. We develop a 1-pass deterministic streaming algorithm that uses space $O(\frac{\log t}{\varepsilon})$ and decides if some subset of the input stream adds up to a value in the range $\{(1 \pm \varepsilon)t\}$. Using this algorithm, we design space-efficient Fully Polynomial-Time Approximation Schemes (FPTAS) solving the search and optimization versions of SubsetSum. Our algorithms run in $O(\frac{1}{\varepsilon} m^2)$ time and $O(\frac{1}{\varepsilon})$ space on unit cost RAMs, where $1 + \varepsilon$ is the approximation factor. This implies a constant space, quadratic time FPTAS on unit cost RAMs when $\varepsilon$ is a constant. Previous FPTAS used space linear in m. In addition, we show that on certain inputs, when a solution is located within a short prefix of the input sequence, our algorithms may run in sublinear time. We apply our techniques to the problem of finding balanced separators, and we extend our results to some other variants of the more general knapsack problem.
  • Solving K-SUM Using Few Linear Queries
Solving k-SUM Using Few Linear Queries. Jean Cardinal (Université libre de Bruxelles (ULB), Brussels, Belgium, [email protected]), John Iacono (New York University, New York, USA, [email protected]), and Aurélien Ooms (Université libre de Bruxelles (ULB), Brussels, Belgium, [email protected]).

Abstract: The k-SUM problem is, given n input real numbers, to determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity $O(n^c)$ with $c < \lceil k/2 \rceil$. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth $O(n^3 \log^2 n)$ solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in $\tilde{O}(n^{\lceil k/2 \rceil + 8})$ time, and performs $O(n^3 \log^2 n)$ linear queries on the input. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to the +8) to the best known algorithm but for the first time also with the number of queries on the input a polynomial that is independent of k. The $O(n^3 \log^2 n)$ bound on the number of linear queries is also a tighter bound than for any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem which is a cornerstone of complexity within P.
  • Lattice-Based Algorithms for Number Partitioning in the Hard Phase
Lattice-based Algorithms for Number Partitioning in the Hard Phase. Bala Krishnamoorthy, William Webb, and Nathan Moyer. Department of Mathematics, Washington State University, Pullman WA. {bkrishna, webb, nmoyer}@math.wsu.edu

Abstract: The number partitioning problem (NPP) is to divide n numbers $a_1, \ldots, a_n$ into two disjoint subsets such that the difference between the two subset sums – the discrepancy, Δ – is minimized. In the balanced version of NPP (BalNPP), the subsets must have the same cardinality. With the $a_j$'s chosen uniformly from $[1, R]$, $R > 2^n$ gives the hard phase, when there are no equal partitions (i.e., Δ = 0) with high probability (whp). In this phase, the minimum partition is also unique whp. Most current methods struggle in the hard phase, as they often perform exhaustive enumeration of all partitions to find the optimum. We propose reductions of NPP and BalNPP in the hard phase to the closest vector problem (CVP). We can solve the original problems by making polynomial numbers of calls to a CVP oracle. In practice, we implement a heuristic which applies basis reduction (BR) to several CVP instances (less than 2n in most cases). This method finds near-optimal solutions without proof of optimality for NPP problems with reasonably large dimensions – up to n = 75. Second, we propose a truncated NPP algorithm, which finds approximate minimum discrepancies for instances on which the BR approach is not effective. In place of the original instance, we solve a modified instance with $\bar{a}_j = \lfloor a_j / T \rceil$ for some T ≤ R. We show that the expected optimal discrepancy of the original problem given by the truncated solution, $E(\Delta_T^*)$, is not much different from the expected optimal discrepancy: $E(\Delta_T^*) \le E(\Delta^*) + nT/2$.
  • The SUBSET-SUM Problem
CMPSCI 611: The SUBSET-SUM Problem. Lecture 18.

We begin today with the problem we didn't get to at the end of last lecture – the SUBSET-SUM problem, which we also saw back in Lecture 8. The input to SUBSET-SUM is a set of numbers $\{a_1, \ldots, a_n\}$ and a target number t, and we ask whether there is a subset of the numbers that adds exactly to t. Using dynamic programming, we showed that we could decide this language in time that is polynomial in n and s, the sum of all the $a_i$. Now we allow the numbers to get larger, so that they now might be n bits long. The problem is still in NP, because we can guess a subset by guessing a bitvector, add the numbers in the set, and verify that we get t. But it's no longer clear that we are in P, and in fact we will now see that the general problem is NP-complete.

We reduce 3-SAT to SUBSET-SUM (with large numbers). We first assume that every clause in our input formula has exactly three literals – we can just repeat literals in the same clause to make this true. Our numbers will be represented in decimal notation, with a column for each of the v variables and a column for each clause in the formula. We'll create an item $a_i$ for each of the 2v literals. This item will have a 1 in the column for its variable, a 1 in the column of each clause where the literal appears, and zeroes everywhere else.
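A sketch of how the literal items described above can be generated (a hypothetical helper of our own; the complete reduction also needs clause slack items and a target value, which the excerpt does not reach):

```python
def literal_items(num_vars, clauses):
    """Build one decimal number per literal: a 1 in the column of its
    variable and a 1 in the column of every clause containing it.

    clauses is a list of clauses, each a list of nonzero ints where +v
    means variable v and -v means its negation (variables are 1-indexed).
    Columns: the first num_vars digits are variable columns, the
    remaining len(clauses) digits are clause columns.
    """
    m = len(clauses)
    items = {}
    for v in range(1, num_vars + 1):
        for lit in (v, -v):
            digits = [0] * (num_vars + m)
            digits[v - 1] = 1                      # variable column
            for c, clause in enumerate(clauses):
                if lit in clause:
                    digits[num_vars + c] = 1       # clause column
            # interpret the digit vector as a decimal number
            items[lit] = int("".join(map(str, digits)))
    return items

# e.g. literal_items(2, [[1, -2, -2]]) builds 4 numbers over 3 columns
```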
  • A Hybrid Recursive Multi-Way Number Partitioning Algorithm Richard E
Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence. A Hybrid Recursive Multi-Way Number Partitioning Algorithm. Richard E. Korf, Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095, [email protected]

Abstract: The number partitioning problem is to divide a given set of n positive integers into k subsets, so that the sums of the numbers in each subset are as nearly equal as possible. While effective algorithms for two-way partitioning exist, multi-way partitioning is much more challenging. We introduce an improved algorithm for optimal multi-way partitioning, by combining several existing algorithms with some new extensions. We test our algorithm for partitioning 31-bit integers from three to ten ways, and demonstrate orders of magnitude speedup over the previous state of the art.

[...] to maximize the smallest subset sum, which is the objective function for the voting manipulation application. Minimizing the largest subset sum also allows our number-partitioning algorithms to be directly applied to bin packing. In bin packing, each of a set of numbers is assigned to a bin of fixed capacity, so that the sum of the numbers in each bin does not exceed the bin capacity, while minimizing the number of bins used. In practice, heuristic approximations for bin packing, such as best-fit decreasing, use only a few more bins than a simple lower bound, such as the sum of all numbers divided by the bin capacity. Thus, an effective bin-packing strategy is to allocate a fixed number of bins, and then to iteratively reduce the number of bins until a solution is no longer possible.
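For reference, a minimal best-fit-decreasing sketch of the bin-packing heuristic mentioned above (generic textbook code, not from the paper):

```python
def best_fit_decreasing(numbers, capacity):
    """Best-fit decreasing: place each number, largest first, into the
    feasible bin with the least remaining room; open a new bin if none fits.
    Returns the list of bins (each a list of numbers)."""
    bins, remaining = [], []
    for x in sorted(numbers, reverse=True):
        # pick the open bin with the smallest leftover capacity that still fits x
        best = None
        for i, r in enumerate(remaining):
            if x <= r and (best is None or r < remaining[best]):
                best = i
        if best is None:
            bins.append([x])
            remaining.append(capacity - x)
        else:
            bins[best].append(x)
            remaining[best] -= x
    return bins

# best_fit_decreasing([7, 5, 4, 3, 1], 10) -> [[7, 3], [5, 4, 1]]
```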
  • A Complete Anytime Algorithm for Number Partitioning
Artificial Intelligence 106 (1998) 181–203. A complete anytime algorithm for number partitioning. Richard E. Korf, Computer Science Department, University of California, Los Angeles, CA 90095, USA. Received 13 August 1997; received in revised form 13 August 1998.

Abstract: Given a set of numbers, the two-way number partitioning problem is to divide them into two subsets, so that the sums of the numbers in each subset are as nearly equal as possible. The problem is NP-complete. Based on a polynomial-time heuristic due to Karmarkar and Karp, we present a new algorithm, called Complete Karmarkar-Karp (CKK), that optimally solves the general number-partitioning problem, and significantly outperforms the best previously-known algorithms for large problem instances. For numbers with twelve significant digits or less, CKK can optimally solve two-way partitioning problems of arbitrary size in practice. For numbers with greater precision, CKK first returns the Karmarkar-Karp solution, then continues to find better solutions as time allows. Over seven orders of magnitude improvement in solution quality is obtained in less than an hour of running time. Rather than building a single solution one element at a time, or modifying a complete solution, CKK constructs subsolutions, and combines them together in all possible ways. This approach may be effective for other NP-hard problems as well. © 1998 Elsevier Science B.V. All rights reserved.

Keywords: Number partitioning; Anytime algorithm; NP-complete
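The Karmarkar-Karp differencing heuristic that CKK starts from can be sketched as follows (a generic two-way version, not Korf's implementation): repeatedly replace the two largest numbers by their difference; the last remaining value is the achieved partition difference.

```python
import heapq

def karmarkar_karp(numbers):
    """Two-way Karmarkar-Karp set differencing.
    Returns the subset-sum difference achieved by the heuristic."""
    # max-heap via negated values
    heap = [-x for x in numbers]
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)   # largest
        b = -heapq.heappop(heap)   # second largest
        # committing a and b to different subsets leaves a difference of a - b
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0

# karmarkar_karp([8, 7, 6, 5, 4]) -> 2, while the optimal difference is 0
# ({8, 7} vs {6, 5, 4}); CKK continues searching past this first solution.
```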
  • Design and Evaluation of Alternate Enumeration Techniques for Subset Sum Problem (arXiv:1612.01693v3 [cs.DS], 12 Dec 2016)
Design and Evaluation of Alternate Enumeration Techniques for Subset Sum Problem. Avni Verma and Kamalakar Karlapalem ([email protected], [email protected]). Data Science and Analytics Center (DSAC), International Institute of Information Technology, Hyderabad, India.

Abstract: The subset sum problem, also referred to as SSP, is an NP-hard computational problem. SSP has its applications in broad domains like cryptography, number theory, operations research and complexity theory. The most famous algorithm for solving SSP is the backtracking algorithm, which has exponential time complexity. Therefore, our goal is to design and develop better alternate enumeration techniques for faster generation of SSP solutions. Given the set of the first n natural numbers, denoted by $X_n$, and a target sum S, we propose various alternate enumeration techniques which find all the subsets of $X_n$ that add up to sum S. In this paper, we present the mathematics behind this exponential problem. We analyze the distribution of the power set of $X_n$ and present formulas which show definite patterns and relations among these subsets. We introduce three major distributions for the power set of $X_n$: Sum Distribution, Length-Sum Distribution and Element Distribution. These distributions are preprocessing procedures for various alternate enumeration techniques for solving SSP. We propose novel algorithms: Subset Generation using Sum Distribution, Subset Generation using Length-Sum Distribution, Basic Bucket Algorithm, Maximum and Minimum Frequency Driven Bucket Algorithms, and Local Search using Maximal and Minimal Subsets for enumerating SSP solutions. We compare the performance of these approaches against the traditional backtracking algorithm. The efficiency and effectiveness of these algorithms are presented with the help of experimental results.
  • Combinatorial Algorithms for Subset Sum Problems
Combinatorial Algorithms for Subset Sum Problems. Dissertation at the Faculty of Mathematics of the Ruhr-Universität Bochum, submitted by Ilya Ozerov. First examiner: Prof. Dr. Alexander May; second examiner: Prof. Dr. Gregor Leander. Date of oral examination: 05.02.2016.

Contents: 1 Introduction. 2 Consistency Problem: 2.1 High-Level Idea (2.1.1 NN Problem; 2.1.2 zeroAND Problem; 2.1.3 A Joint Solution); 2.2 Consistency Problem (2.2.1 Preliminaries; 2.2.2 Problem and Algorithm; 2.2.3 Analysis); 2.3 Weight Match Problem (2.3.1 General Case; 2.3.2 Random Weight Match Problem); 2.4 Nearest Neighbor Problem (2.4.1 Analysis). 3 Subset Sum Problem: 3.1 Generalized Problem (3.1.1 Brute Force; 3.1.2 Meet-in-the-Middle); 3.2 Random Subset Sum Problem (3.2.1 Tools); 3.3 Known Results (3.3.1 Meet-in-the-Middle Revisited; 3.3.2 Classical Representations); 3.4 Consistent Representations (3.4.1 Group Weight Match Problem; 3.4.2 Algorithm). 4 Binary Subset Sum Problem: 4.1 Known Results (4.1.1 Meet-in-the-Middle; 4.1.2 Representations I; 4.1.3 Representations II); 4.2 Novel Results (4.2.1 Consistent Representations I; 4.2.2 Consistent Representations II); 4.3 Results in Special Groups (4.3.1 Algorithms ...).
  • The Subset Sum Problem: Reducing Time Complexity of NP-Completeness with Quantum Search
Undergraduate Journal of Mathematical Modeling: One + Two, Volume 4 (2012 Spring), Article 2. The Subset Sum Problem: Reducing Time Complexity of NP-Completeness with Quantum Search. Bo Moon, University of South Florida. Advisors: Manoug Manougian, Mathematics and Statistics; Jing Wang, Computer Science & Engineering. Problem Suggested By: Jing Wang.

Recommended Citation: Moon, Bo (2012) "The Subset Sum Problem: Reducing Time Complexity of NP-Completeness with Quantum Search," Undergraduate Journal of Mathematical Modeling: One + Two: Vol. 4: Iss. 2, Article 2. DOI: http://dx.doi.org/10.5038/2326-3652.4.2.2. Available at: https://scholarcommons.usf.edu/ujmm/vol4/iss2/2

Abstract: The Subset Sum Problem is a member of the NP-complete class, so no known polynomial time algorithm exists for it. Although there are polynomial time approximations and heuristics, these are not always acceptable, yet exact-solution algorithms are infeasible for large input. Quantum computation offers new insights for not only the Subset Sum Problem but also the entire NP-complete class; most notably, Grover's quantum algorithm for an unstructured database search can be tailored to identify solutions to problems within mathematics and computer science. This paper discusses the physical and conceptual feasibility of quantum computation and demonstrates the utility of quantum search by analyzing the time complexities of the classical dynamic programming algorithm and Grover's algorithm in solving the Subset Sum Problem, evincing the implications this has on the NP-complete class in general.
  • Optimal Multi-Way Number Partitioning
University of California, Los Angeles. Optimal Multi-Way Number Partitioning. A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Computer Science by Ethan L. Schreiber, 2014. © Copyright by Ethan L. Schreiber 2014.

Abstract of the Dissertation: Optimal Multi-Way Number Partitioning, by Ethan L. Schreiber, Doctor of Philosophy in Computer Science, University of California, Los Angeles, 2014. Professor Richard E. Korf, Chair.

The NP-hard number-partitioning problem is to separate a multiset S of n positive integers into k subsets, such that the largest sum of the integers assigned to any subset is minimized. The classic application is scheduling a set of n jobs with different run times onto k identical machines such that the makespan, the time to complete the schedule, is minimized. The two-way number-partitioning decision problem is one of the original 21 problems Richard Karp proved NP-complete. It is also one of Garey and Johnson's six fundamental NP-complete problems, and the only one involving numbers. This thesis explores algorithms for solving multi-way number-partitioning problems optimally. We explore previously existing algorithms as well as our own algorithms: sequential number partitioning (SNP), a branch-and-bound algorithm; binary-search improved bin completion (BSIBC), a bin-packing algorithm; cached iterative weakening (CIW), an iterative weakening algorithm; and a variant of CIW, low cardinality search (LCS). We show experimentally that for high precision random problem instances, SNP, CIW and LCS are all state-of-the-art algorithms depending on the values of n and k.