Dense Subset Sum May Be the Hardest

Per Austrin¹, Petteri Kaski², Mikko Koivisto³, and Jesper Nederlof⁴

1 School of Computer Science and Communication, KTH Royal Institute of Technology, Sweden
  [email protected]
2 Helsinki Institute for Information Technology HIIT & Department of Computer Science, Aalto University, Finland
  [email protected]
3 Helsinki Institute for Information Technology HIIT & Department of Computer Science, University of Helsinki, Finland
  [email protected]
4 Department of Mathematics and Computer Science, Technical University of Eindhoven, The Netherlands
  [email protected]

Abstract

The Subset Sum problem asks whether a given set of n positive integers contains a subset of elements that sum up to a given target t. It is an outstanding open question whether the O∗(2^{n/2})-time algorithm for Subset Sum by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a "truly faster", O∗(2^{(0.5−δ)n})-time algorithm for some constant δ > 0. Continuing earlier work [STACS 2015], we study Subset Sum parameterized by the maximum bin size β, defined as the largest number of subsets of the n input integers that yield the same sum. For every ε > 0 we give a truly faster algorithm for instances with β ≤ 2^{(0.5−ε)n}, as well as for instances with β ≥ 2^{0.661n}. Consequently, we also obtain a characterization in terms of the popular density parameter n/log₂ t: if all instances of density at least 1.003 admit a truly faster algorithm, then so does every instance. This goes against the current intuition that instances of density 1 are the hardest, and therefore is a step toward answering the open question in the affirmative.
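For context, the O∗(2^{n/2}) bound referenced in the abstract is achieved by the classical meet-in-the-middle scheme of Horowitz and Sahni: enumerate all subset sums of each half of the input and match complementary pairs. The following is a minimal illustrative Python sketch of that scheme (not code from the paper; the function name and interface are chosen for this example only).

```python
from bisect import bisect_left


def subset_sum_mitm(nums, target):
    """Decide Subset Sum by meet-in-the-middle in O*(2^(n/2)) time and space."""
    # Split the input into two halves of roughly n/2 items each.
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(items):
        # Enumerate all 2^len(items) subset sums of this half.
        sums = [0]
        for x in items:
            sums = sums + [s + x for s in sums]
        return sums

    left_sums = all_sums(left)
    right_sums = sorted(all_sums(right))

    # For each left-half sum, binary-search for the complementary right-half sum.
    for s in left_sums:
        need = target - s
        i = bisect_left(right_sums, need)
        if i < len(right_sums) and right_sums[i] == need:
            return True
    return False


if __name__ == "__main__":
    print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5 = 9
    print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 30))  # False
```

Up to polynomial factors this uses 2^{n/2} time and space, which is exactly the barrier the abstract asks whether one can break for instances classified by the maximum bin size β or the density n/log₂ t.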