Chicago Journal of Theoretical Computer Science
The MIT Press

Volume 1997, Article 1
12 March 1997

ISSN 1073-0486. MIT Press Journals, 55 Hayward St., Cambridge, MA 02142 USA; (617)253-2889; [email protected], [email protected]. Published one article at a time in LaTeX source form on the Internet. Pagination varies from copy to copy. For more information and other articles see:

• http://www-mitpress.mit.edu/jrnls-catalog/chicago.html
• http://www.cs.uchicago.edu/publications/cjtcs/
• ftp://mitpress.mit.edu/pub/CJTCS
• ftp://cs.uchicago.edu/pub/publications/cjtcs

The Chicago Journal of Theoretical Computer Science is abstracted or indexed in Research Alert®, SciSearch®, Current Contents®/Engineering Computing & Technology, and CompuMath Citation Index®.

© 1997 The Massachusetts Institute of Technology. Subscribers are licensed to use journal articles in a variety of ways, limited only as required to insure fair attribution to authors and the journal, and to prohibit use in a competing commercial product. See the journal's World Wide Web site for further details. Address inquiries to the Subsidiary Rights Manager, MIT Press Journals; (617)253-2864; [email protected].

The Chicago Journal of Theoretical Computer Science is a peer-reviewed scholarly journal in theoretical computer science. The journal is committed to providing a forum for significant results on theoretical aspects of all topics in computer science.

Editor in chief: Janos Simon
Consulting editors: Joseph Halpern, Stuart A. Kurtz, Raimund Seidel
Editors: Martin Abadi, Pankaj Agarwal, Eric Allender, Tetsuo Asano, László Babai, Eric Bach, Stephen Brookes, Jin-Yi Cai, Anne Condon, Cynthia Dwork, David Eppstein, Ronald Fagin, Lance Fortnow, Steven Fortune, Greg Frederickson, Andrew Goldberg, Georg Gottlob, Vassos Hadzilacos, Juris Hartmanis, Maurice Herlihy, Ted Herman, Stephen Homer, Neil Immerman, Howard Karloff, Philip Klein, Phokion Kolaitis, Stephen Mahaney, Michael Merritt, John Mitchell, Ketan Mulmuley, Gil Neiger, David Peleg, Andrew Pitts, James Royer, Alan Selman, Nir Shavit, Eva Tardos, Sam Toueg, Moshe Vardi, Jennifer Welch, Pierre Wolper
Managing editor: Michael J. O'Donnell
Electronic mail: [email protected]

On Limited versus Polynomial Nondeterminism

Uriel Feige    Joe Kilian

12 March, 1997

Abstract

Abstract-1  In this paper, we show that efficient algorithms for some problems that require limited nondeterminism imply the subexponential simulation of nondeterministic computation by deterministic computation. In particular, if cliques of size O(log n) can be found in polynomial time, then nondeterministic time f(n) is contained in deterministic time 2^{O(√f(n) polylog f(n))}.

1 Introduction

1-1  A major open question in computational complexity is whether P = NP. In other words, is it true that if a language L can be recognized in time f(n) by a nondeterministic Turing machine (where n denotes the input length), then L can be recognized in time (f(n))^c by a deterministic Turing machine (for some constant c)? Let DTIME(f(n)) denote the class of languages accepted by a deterministic Turing machine in time O(f(n)), and let NTIME(f(n)) denote the class of languages accepted by a nondeterministic Turing machine in time O(f(n)). The trivial relations between these classes are DTIME(f(n)) ⊂ NTIME(f(n)) (by definition), and NTIME(f(n)) ⊂ DTIME(2^{O(f(n))}) (by exhaustive search).
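For intuition, the exhaustive-search containment amounts to the following count (a standard argument, included here only as a sketch): a nondeterministic machine running in time O(f(n)) makes at most O(f(n)) binary nondeterministic choices, and a deterministic machine can simulate every choice sequence in turn:

\[
\mathrm{NTIME}(f(n)) \subseteq \mathrm{DTIME}\Bigl(\underbrace{2^{O(f(n))}}_{\text{choice sequences}} \cdot \underbrace{O(f(n))}_{\text{time per branch}}\Bigr) = \mathrm{DTIME}\bigl(2^{O(f(n))}\bigr).
\]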
Perhaps the only nontrivial relation known is that DTIME(n) ≠ NTIME(n) [PPST83].

1-2  A natural question to ask is whether there is a general methodology that improves over exhaustive search and applies to a wide range of NP problems. For some specific NP problems, improvements over exhaustive search that involve the constant in the exponent were obtained [TT77, SS79, BE95]. However, we seek a more dramatic improvement, with more general applicability. This leads to the following question:

Question 1  Is there a subexponential deterministic simulation of nondeterministic computation? More explicitly, is there some constant δ < 1 such that NTIME(f(n)) ⊂ DTIME(2^{O(f(n)^δ)})?

1-3  Certain NP problems require only a limited amount of nondeterminism, in the sense that their (natural) NP-witness is at most polylogarithmic in the input length. For such problems, improvements over exhaustive search are sometimes dramatic. Examples of problems that require limited nondeterminism can be constructed by considering parameterized versions of NP optimization problems, in which the parameter k is restricted to be small (typically constant, or O(log n)). In the problem definitions below, k is a positive-integer input parameter.

Path
Instance: A graph G of order n.
Question: Does G contain a simple path of length at least k?

Vertex Cover
Instance: A graph G of order n.
Question: Does G contain a set of vertices of cardinality at most k that is incident with every edge of G?

Clique
Instance: A graph G of order n.
Question: Does G contain a complete subgraph of order at least k?

Monotone Circuit Satisfiability
Instance: A monotone circuit C on n Boolean inputs.
Question: Does C accept an input vector of Hamming weight at most k?

Tournament Dominating Set
Instance: A digraph G = (V, E) of order n, in which for every pair of vertices u, v ∈ V, either (u, v) ∈ E or (v, u) ∈ E.
Question: Find a set D ⊂ V of minimum cardinality such that for each vertex v ∈ V, there is an edge (u, v) ∈ E for some vertex u ∈ D.

VC Dimension
Instance: A set U of cardinality n and a family F of n subsets of U.
Question: Find a set S ⊂ U of maximum cardinality that is shattered by F. Set S is shattered by F if for every subset T ⊂ S there is a set A ∈ F such that A ∩ S = T.

We remark that the last two problems above are search problems, rather than decision problems, and also that there is no explicit parameter k involved. However, a parameter of k ≃ log n is implicit, since the cardinality of the optimal solution is at most log n + 1.

1-4  For the problems above, with parameter k = log n, an exhaustive search requires time of roughly n^{log n}. However, the first two of these problems have algorithms of time complexity O(n^c 2^k), for some constant c, which is polynomial even for k = log n [PY96, DF95, AYZ95]. (Note that these two problems are NP-hard for a general k, since the Vertex Cover and Hamiltonian Path problems are.) Is this phenomenon of dramatically improving over exhaustive search a general one when problems that require a limited amount of nondeterminism are concerned? Does the same apply for the k-Clique problem when k = O(log n)? These types of questions are treated by the theory of fixed-parameter intractability [DF92], and motivate the introduction of the complexity class LOGSNP [PY96].
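To make running times of the form O(n^c 2^k) concrete, here is a minimal Python sketch of the bounded-search-tree idea for Vertex Cover. It illustrates the general technique rather than the specific algorithms of the cited papers; the edge-list representation and the function name are illustrative choices.

def has_vertex_cover(edges, k):
    # Bounded search tree for k-Vertex Cover.  Every cover must contain u or v
    # for the edge (u, v), so branch on both choices.  The recursion depth is
    # at most k, giving at most 2^k leaves; each call costs O(|E|) time, for
    # O(2^k |E|) time overall.
    if not edges:
        return True   # no edges left to cover
    if k == 0:
        return False  # edges remain, but the budget is exhausted
    u, v = edges[0]
    without_u = [(a, b) for (a, b) in edges if u not in (a, b)]
    without_v = [(a, b) for (a, b) in edges if v not in (a, b)]
    return has_vertex_cover(without_u, k - 1) or has_vertex_cover(without_v, k - 1)

For k = log n the search tree has only 2^k = n leaves, so the whole procedure runs in polynomial time, which is the phenomenon referred to above.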
We shall return to discuss these notions in Section 4.2.1. For now, we shall concentrate on the log-Clique problem, in which one must find a clique of size k ≃ log n in the input graph.

Question 2  Is the log-Clique problem solvable in polynomial time?

1-5  Our main result is a connection between Question 2, which deals with a case of limited nondeterminism, and Question 1, which deals with polynomial nondeterminism. In Section 2 we show that if the answer to Question 2 is positive, then so is the answer to Question 1. In Section 3 we present extensions that deal with the complexity of finding approximate solutions to the k-Clique problem, for small values of k. In Section 4 we discuss our results and their implications. In particular, we discuss related work showing (for the k-Monotone Circuit Satisfiability problem [ADF95]), or implying in conjunction with our work (for the Tournament Dominating Set and VC Dimension problems [PY96]), additional problems of limited nondeterminism whose complexity is related to the subexponential simulation of NP.

2 On the Complexity of the log-Clique Problem

2-1  The results of this section were first proven as corollaries of the results of Section 3. The elementary proofs given here were noticed by Noam Nisan, who permitted us to use them.

2-2  Lemma 1  Suppose there is a polynomial-time algorithm for the k-Clique problem when k ≤ log n. Then the k-Clique problem is solvable in time n^{O(√n)}, for every k.

Proof of Lemma 1  Let A be an algorithm that in time O(n^c) solves the k-Clique problem for k ≤ log n. Let (G, k) be an input instance for the k-Clique problem (with arbitrarily large k). Reduce (G, k) to a new instance (Ĝ, k̂) of the k-Clique problem as follows. Without loss of generality, assume that k is a perfect square. (Otherwise, let k′ be the least perfect square greater than k. Add k′ − k vertices to G, connected to every other vertex.) Graph Ĝ contains N = \binom{n}{\sqrt{k}} vertices, where each vertex of Ĝ corresponds to a √k-subset of the vertices of G. Two vertices of Ĝ are connected by an edge if the 2√k vertices that they correspond to are distinct and form a clique of size 2√k in G. Set k̂ = √k.
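The construction is simple enough to state as code. The following Python sketch is one possible rendering of the reduction (the vertex and edge representations are illustrative choices, and no attempt is made to be efficient beyond enumerating the \binom{n}{\sqrt{k}} subsets explicitly):

from itertools import combinations
from math import isqrt

def reduce_clique_instance(n, edges, k):
    # Build (G_hat, k_hat) from (G, k) as in the proof of Lemma 1.
    # Vertices of G are 0, ..., n-1; `edges` is a collection of vertex pairs.
    # Assumes k is already a perfect square (otherwise pad G as in the proof).
    r = isqrt(k)                      # r = sqrt(k), the block size
    assert r * r == k, "round k up to a perfect square and pad G first"
    adj = {frozenset(e) for e in edges}

    def is_clique(vertices):
        # True iff all pairs among `vertices` are edges of G.
        return all(frozenset(p) in adj for p in combinations(vertices, 2))

    # One vertex of G_hat for every r-subset of V(G).
    hat_vertices = [frozenset(s) for s in combinations(range(n), r)]
    hat_edges = []
    for A, B in combinations(hat_vertices, 2):
        union = A | B
        # A and B are adjacent iff their 2*sqrt(k) underlying vertices are
        # distinct and together form a clique of size 2*sqrt(k) in G.
        if len(union) == 2 * r and is_clique(union):
            hat_edges.append((A, B))
    return hat_vertices, hat_edges, r  # k_hat = sqrt(k)

Note that any k̂ = √k mutually adjacent vertices of Ĝ correspond to pairwise disjoint √k-subsets whose pairwise unions are cliques in G, so their union is a clique of size k in G; conversely, partitioning a k-clique of G into √k blocks of size √k yields a √k-clique in Ĝ.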