On the Work of Madhu Sudan: the 2002 Nevanlinna Prize Winner
Total pages: 16
File type: PDF, size: 1,020 KB
Recommended publications
COMPLEXITY THEORY Review Lecture 13: Space Hierarchy and Gaps
COMPLEXITY THEORY, Lecture 13: Space Hierarchy and Gaps. Markus Krötzsch, Knowledge-Based Systems, TU Dresden, 26th Dec 2018.

Review: Time Hierarchy Theorems

Time Hierarchy Theorem 12.12: If f, g : ℕ → ℕ are such that f is time-constructible and g · log g ∈ o(f), then DTime∗(g) ⊊ DTime∗(f).

Nondeterministic Time Hierarchy Theorem 12.14: If f, g : ℕ → ℕ are such that f is time-constructible and g(n + 1) ∈ o(f(n)), then NTime∗(g) ⊊ NTime∗(f).

In particular, we find that P ≠ ExpTime and NP ≠ NExpTime within the chain L ⊆ NL ⊆ P ⊆ NP ⊆ PSpace ⊆ ExpTime ⊆ NExpTime ⊆ ExpSpace.

A Hierarchy for Space

For space, we can always assume a single working tape: tape reduction leads to a constant-factor increase in space, and constant factors can be eliminated by space compression. Therefore DSpace_k(f) = DSpace_1(f). Space turns out to be easier to separate – we get:

Space Hierarchy Theorem 13.1: If f, g : ℕ → ℕ are such that f is space-constructible and g ∈ o(f), then DSpace(g) ⊊ DSpace(f).

Proving the Space Hierarchy Theorem: Again, we construct a diagonalisation machine D. We define a multi-tape TM D for inputs of the form ⟨M⟩, w (other cases do not matter), assuming that |⟨M⟩, w| = n:
• Compute f(n) in unary to mark the available space on the working tape.
• Initialise a separate countdown tape with the largest binary number that can be written in f(n) space.
• Simulate M on ⟨M⟩, w, making sure that only previously marked tape cells are used.
• Time-bound the simulation using the content of the countdown tape.
Challenge: TMs can run forever even within bounded space.
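As a worked illustration of the "In particular" step above, here is a short derivation of P ≠ ExpTime from Theorem 12.12 (a sketch added for this listing, using only the theorems quoted in the excerpt):

```latex
% Every polynomial bound n^k is O(2^n), so P \subseteq DTime(2^n).
% Theorem 12.12 applies with g(n) = 2^n and f(n) = 2^{2n}, because
% g(n) log g(n) = n * 2^n is in o(2^{2n}).  Hence
\[
  \mathrm{P} \;\subseteq\; \mathrm{DTime}(2^{n})
  \;\subsetneq\; \mathrm{DTime}(2^{2n})
  \;\subseteq\; \mathrm{ExpTime},
\]
% so some language in ExpTime lies outside DTime(2^n) and therefore outside P,
% i.e. P is not equal to ExpTime.  The nondeterministic analogue via
% Theorem 12.14 gives NP distinct from NExpTime in the same way.
```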
Reproducibility and Pseudo-Determinism in Log-Space
By Ofer Grossman. S.B., Massachusetts Institute of Technology (2017). Submitted to the Department of Electrical Engineering and Computer Science on May 15, 2020, in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. © Massachusetts Institute of Technology 2020. All rights reserved. Certified by Shafi Goldwasser, RSA Professor of Electrical Engineering and Computer Science, Thesis Supervisor. Accepted by Leslie A. Kolodziejski, Professor of Electrical Engineering and Computer Science, Chair, Department Committee on Graduate Students.

Abstract: A curious property of randomized log-space search algorithms is that their outputs are often longer than their workspace. This leads to the question: how can we reproduce the results of a randomized log-space computation without storing the output or randomness verbatim? Running the algorithm again with new …
Ab Pcpchap.Pdf
Computational Complexity: A Modern Approach. Sanjeev Arora and Boaz Barak, Princeton University. http://www.cs.princeton.edu/theory/complexity/ [email protected]. Not to be reproduced or distributed without the authors' permission.

Chapter 11. PCP Theorem and Hardness of Approximation: An Introduction

"...most problem reductions do not create or preserve such gaps... To create such a gap in the generic reduction (cf. Cook)... also seems doubtful. The intuitive reason is that computation is an inherently unstable, non-robust mathematical object, in the sense that it can be turned from non-accepting to accepting by changes that would be insignificant in any reasonable metric." – Papadimitriou and Yannakakis [PY88]

This chapter describes the PCP Theorem, a surprising discovery of complexity theory, with many implications to algorithm design. Since the discovery of NP-completeness in 1972 researchers had mulled over the issue of whether we can efficiently compute approximate solutions to NP-hard optimization problems. They failed to design such approximation algorithms for most problems (see Section 1.1 for an introduction to approximation algorithms). They then tried to show that computing approximate solutions is also hard, but apart from a few isolated successes this effort also stalled. Researchers slowly began to realize that the Cook-Levin-Karp style reductions do not suffice to prove any limits on approximation algorithms (see the above quote from an influential Papadimitriou-Yannakakis paper that appeared a few years before the discoveries described in this chapter). The PCP theorem, discovered in 1992, gave a new definition of NP and provided a new starting point for reductions.
Fault-Tolerant Distributed Computing in Full-Information Networks
Shafi Goldwasser∗ (CSAIL, MIT), Elan Pavlov (MIT), Vinod Vaikuntanathan∗ (CSAIL, MIT). Cambridge MA, USA. December 15, 2006.

Abstract: In this paper, we use random-selection protocols in the full-information model to solve classical problems in distributed computing. Our main results are the following:
• An O(log n)-round randomized Byzantine Agreement (BA) protocol in a synchronous full-information network tolerating t < n/(3+ε) faulty players (for any constant ε > 0). As such, our protocol is asymptotically optimal in terms of fault-tolerance.
• An O(1)-round randomized BA protocol in a synchronous full-information network tolerating t = O(n/(log n)^1.58) faulty players.
• A compiler that converts any randomized protocol Π_in designed to tolerate t fail-stop faults, where the source of randomness of Π_in is an SV-source, into a protocol Π_out that tolerates min(t, n/3) Byzantine faults. If the round-complexity of Π_in is r, that of Π_out is O(r log n).

Central to our results is the development of a new tool, "audited protocols". Informally, "auditing" is a transformation that converts any protocol that assumes built-in broadcast channels into one that achieves a slightly weaker guarantee, without assuming broadcast channels. We regard this as a tool of independent interest, which could potentially find applications in the design of simple and modular randomized distributed algorithms.

∗Supported by NSF grants CNS-0430450 and CCF-0514167.
Challenges in Web Search Engines
Monika R. Henzinger (Google Inc., 2400 Bayshore Parkway, Mountain View, CA 94043, [email protected]), Rajeev Motwani* (Department of Computer Science, Stanford University, Stanford, CA 94305, [email protected]), Craig Silverstein (Google Inc., 2400 Bayshore Parkway, Mountain View, CA 94043, [email protected]).

Abstract: This article presents a high-level discussion of some problems that are unique to web search engines. The goal is to raise awareness and stimulate research in these areas.

1 Introduction. Web search engines are faced with a number of difficult problems in maintaining or enhancing the quality of their performance. These problems are either unique to this domain, or novel variants of problems that have been studied in the literature. Our goal in writing this article is to raise awareness of …

… or a combination thereof. There are web ranking optimization services which, for a fee, claim to place a given web site highly on a given search engine. Unfortunately, spamming has become so prevalent that every commercial search engine has had to take measures to identify and remove spam. Without such measures, the quality of the rankings suffers severely. Traditional research in information retrieval has not had to deal with this problem of "malicious" content in the corpora. Quite certainly, this problem is not present in the benchmark document collections used by researchers in the past; indeed, those collections consist exclusively of high-quality content such as newspaper or scientific articles. Similarly, the spam problem is not present in the context of intranets, the web that exists within a corporation.
On the NP-Completeness of the Minimum Circuit Size Problem
John M. Hitchcock∗ (Department of Computer Science, University of Wyoming), A. Pavan† (Department of Computer Science, Iowa State University).

Abstract: We study the Minimum Circuit Size Problem (MCSP): given the truth-table of a Boolean function f and a number k, does there exist a Boolean circuit of size at most k computing f? This is a fundamental NP problem that is not known to be NP-complete. Previous work has studied consequences of the NP-completeness of MCSP. We extend this work and consider whether MCSP may be complete for NP under more powerful reductions. We also show that NP-completeness of MCSP allows for amplification of circuit complexity. We show the following results.
• If MCSP is NP-complete via many-one reductions, the following circuit complexity amplification result holds: if NP ∩ coNP requires 2^{n^Ω(1)}-size circuits, then E^NP requires 2^{Ω(n)}-size circuits.
• If MCSP is NP-complete under truth-table reductions, then EXP ≠ NP ∩ SIZE(2^{n^ε}) for some ε > 0 and EXP ≠ ZPP. This result extends to polylog Turing reductions.

1 Introduction. Many natural NP problems are known to be NP-complete. Ladner's theorem [14] tells us that if P is different from NP, then there are NP-intermediate problems: problems that are in NP, not in P, but also not NP-complete. The examples arising out of Ladner's theorem come from diagonalization and are not natural. A canonical candidate example of a natural NP-intermediate problem is the Graph Isomorphism (GI) problem.
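To make the problem statement above concrete, the following brute-force decider is a minimal sketch (added for this listing, not from the paper); it assumes a straight-line circuit model over the basis {AND, OR, NOT} in which size counts gates, and its running time is exponential in both the number of inputs and k:

```python
from itertools import product

OPS = ("AND", "OR", "NOT")

def eval_circuit(gates, bits):
    """Evaluate a straight-line circuit on one input assignment.
    `bits`  : tuple of input values x_0..x_{n-1}
    `gates` : sequence of (op, i, j); i, j index earlier wires
              (inputs first, then previously computed gates)."""
    wires = list(bits)
    for op, i, j in gates:
        if op == "AND":
            wires.append(wires[i] & wires[j])
        elif op == "OR":
            wires.append(wires[i] | wires[j])
        else:  # "NOT" ignores its second argument
            wires.append(1 - wires[i])
    return wires[-1]  # the last gate is the output

def mcsp(truth_table, k):
    """Return True iff some circuit with 1..k gates computes the function
    whose truth table (length 2^n, entries 0/1) is given."""
    n = len(truth_table).bit_length() - 1
    assert len(truth_table) == 1 << n, "truth table length must be a power of two"
    inputs = list(product((0, 1), repeat=n))
    for size in range(1, k + 1):
        # Gate g may read any of the n inputs or any of the g earlier gates.
        gate_choices = [
            [(op, i, j) for op in OPS for i in range(n + g) for j in range(n + g)]
            for g in range(size)
        ]
        for gates in product(*gate_choices):
            if all(eval_circuit(gates, x) == truth_table[idx]
                   for idx, x in enumerate(inputs)):
                return True
    return False

if __name__ == "__main__":
    xor_tt = [0, 1, 1, 0]   # truth table of XOR on two inputs
    print(mcsp(xor_tt, 3))  # False: three AND/OR/NOT gates are not enough
    # The k = 4 search enumerates about a million circuits and may take a minute.
    print(mcsp(xor_tt, 4))  # True: e.g. AND(OR(x0,x1), NOT(AND(x0,x1)))
```

Even this naive search shows why MCSP is in NP: a circuit of size at most k is a short witness, and checking it against the 2^n-entry truth table takes time polynomial in the truth-table length.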
The Multiplicative Weights Update Method: a Meta Algorithm and Applications
Sanjeev Arora∗, Elad Hazan, Satyen Kale.

Abstract: Algorithms in varied fields use the idea of maintaining a distribution over a certain set and use the multiplicative update rule to iteratively change these weights. Their analyses are usually very similar and rely on an exponential potential function. We present a simple meta-algorithm that unifies these disparate algorithms and derives them as simple instantiations of the meta-algorithm.

1 Introduction. Algorithms in varied fields work as follows: a distribution is maintained on a certain set, and at each step the probability assigned to i is multiplied or divided by (1 + εC(i)) where C(i) is some kind of "payoff" for element i. (Rescaling may be needed to ensure that the new values form a distribution.) Some examples include: the AdaBoost algorithm in machine learning [FS97]; algorithms for game playing studied in economics (see below); the Plotkin-Shmoys-Tardos algorithm for packing and covering LPs [PST91], and its improvements in the case of flow problems by Young, Garg-Könemann, and Fleischer [You95, GK98, Fle00]; Impagliazzo's proof of the Yao XOR lemma [Imp95], etc. The analysis of the running time uses a potential function argument and the final running time is proportional to 1/ε². It has been clear to most researchers that these results are very similar; see, for instance, Khandekar's PhD thesis [Kha04]. Here we point out that these are all instances of the same (more general) algorithm. This meta …

∗This project supported by David and Lucile Packard Fellowship and NSF grant CCR-0205594.
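The update rule described in the introduction fits in a few lines. The following is a minimal sketch (added for this listing; the function names and the [-1, 1] payoff convention are assumptions, not from the survey):

```python
def mwu(n, T, payoff, eps=0.1):
    """Multiplicative weights update meta-algorithm (sketch).

    n      : number of elements ("experts")
    T      : number of rounds
    payoff : payoff(t, i, dist) -> value in [-1, 1] for element i in round t,
             possibly depending on the distribution `dist` just played
    eps    : learning rate in (0, 1/2], so weights stay positive
    """
    weights = [1.0] * n
    for t in range(T):
        total = sum(weights)
        dist = [w / total for w in weights]            # distribution played in round t
        gains = [payoff(t, i, dist) for i in range(n)]  # observed payoffs C_t(i)
        # Core rule: multiply each weight by (1 + eps * payoff).
        weights = [w * (1.0 + eps * g) for w, g in zip(weights, gains)]
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    # Toy run: element 2 always earns payoff 1, the rest earn 0.
    final = mwu(n=5, T=200, payoff=lambda t, i, dist: 1.0 if i == 2 else 0.0)
    print([round(p, 3) for p in final])
```

In the toy run, the element that always earns payoff 1 sees its weight grow by a factor (1 + ε) per round, so after 200 rounds essentially all probability mass sits on it; the applications listed in the excerpt differ mainly in how the payoffs C(i) are generated.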
The Complexity Zoo
Scott Aaronson, www.ScottAaronson.com. LaTeX translation by Chris Bourke, [email protected]. 417 classes and counting.

Contents: 1 About This Document, 3; 2 Introductory Essay, 4; 2.1 Recommended Further Reading, 4; 2.2 Other Theory Compendia, 5; 2.3 Errors?, 5; 3 Pronunciation Guide, 6; 4 Complexity Classes, 10; 5 Special Zoo Exhibit: Classes of Quantum States and Probability Distributions, 110; 6 Acknowledgements, 116; 7 Bibliography, 117.

1 About This Document. What is this? Well, it's a PDF version of the website www.ComplexityZoo.com typeset in LaTeX using the complexity package. Well, what's that? The original Complexity Zoo is a website created by Scott Aaronson which contains a (more or less) comprehensive list of Complexity Classes studied in the area of theoretical computer science known as Computational Complexity. I took on the (mostly painless, thank god for regular expressions) task of translating the Zoo's HTML code to LaTeX for two reasons. First, as a regular Zoo patron, I thought, "what better way to honor such an endeavor than to spruce up the cages a bit and typeset them all in beautiful LaTeX." Second, I thought it would be a perfect project to develop complexity, a LaTeX package I've created that defines commands to typeset (almost) all of the complexity classes you'll find here (along with some handy options that allow you to conveniently change the fonts with a single option parameter). To get the package, visit my own home page at http://www.cse.unl.edu/~cbourke/.
University Microfilms International, 300 North Zeeb Road, Ann Arbor, Michigan 48106 USA; St John's Road …
INFORMATION TO USERS

This material was produced from a microfilm copy of the original document. While the most advanced technological means to photograph and reproduce this document have been used, the quality is heavily dependent upon the quality of the original submitted. The following explanation of techniques is provided to help you understand markings or patterns which may appear on this reproduction.

1. The sign or "target" for pages apparently lacking from the document photographed is "Missing Page(s)". If it was possible to obtain the missing page(s) or section, they are spliced into the film along with adjacent pages. This may have necessitated cutting thru an image and duplicating adjacent pages to insure you complete continuity.

2. When an image on the film is obliterated with a large round black mark, it is an indication that the photographer suspected that the copy may have moved during exposure and thus cause a blurred image. You will find a good image of the page in the adjacent frame.

3. When a map, drawing or chart, etc., was part of the material being photographed the photographer followed a definite method in "sectioning" the material. It is customary to begin photoing at the upper left hand corner of a large sheet and to continue photoing from left to right in equal sections with a small overlap. If necessary, sectioning is continued again — beginning below the first row and continuing on until complete.

4. The majority of users indicate that the textual content is of greatest value, however, a somewhat higher quality reproduction could be made from "photographs" if essential to the understanding of the dissertation.
Lecture 19,20 (Nov 15&17, 2011): Hardness of Approximation, PCP
CMPUT 675: Approximation Algorithms, Fall 2011. Lecture 19,20 (Nov 15 & 17, 2011): Hardness of Approximation, PCP theorem. Lecturer: Mohammad R. Salavatipour. Scribe: based on older notes.

19.1 Hardness of Approximation

So far we have been mostly talking about designing approximation algorithms and proving upper bounds. From now until the end of the course we will be talking about proving lower bounds (i.e. hardness of approximation). We are familiar with the theory of NP-completeness. When we prove that a problem is NP-hard it implies that, assuming P ≠ NP, there is no polynomial time algorithm that solves the problem (exactly). For example, for SAT, deciding between Yes/No is hard (again assuming P ≠ NP). We would like to show that even deciding between those instances that are (almost) satisfiable and those that are far from being satisfiable is also hard. In other words, create a gap between Yes instances and No instances. These kinds of gaps imply hardness of approximation for optimization versions of NP-hard problems. In fact the PCP theorem (that we will see next lecture) is equivalent to the statement that MAX-SAT is NP-hard to approximate within a factor of (1 + ε), for some fixed ε > 0. Most hardness of approximation results rely on the PCP theorem. For proving hardness, for example for vertex cover, the PCP theorem shows the existence of the following reduction: given a formula ϕ for SAT, we build a graph G(V, E) in polytime such that:
• if ϕ is a yes-instance, then G has a vertex cover of size ≤ (2/3)|V|;
• if ϕ is a no-instance, then every vertex cover of G has size > α · (2/3)|V| for some fixed α > 1.
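To spell out how such a gap yields hardness of approximation, here is the standard one-line argument (a sketch added for this listing, not part of the notes) that the reduction above rules out a polynomial-time α-approximation for minimum vertex cover unless P = NP:

```latex
% Suppose A is a polynomial-time algorithm returning a vertex cover of size
% A(G) <= alpha * OPT(G) on every graph G.  Run the reduction on a SAT
% formula phi to obtain G = G(V, E).  Then
\[
  \varphi \text{ a yes-instance} \;\Longrightarrow\;
  \mathrm{OPT}(G) \le \tfrac{2}{3}|V| \;\Longrightarrow\;
  A(G) \le \alpha \cdot \tfrac{2}{3}|V|,
\]
\[
  \varphi \text{ a no-instance} \;\Longrightarrow\;
  \mathrm{OPT}(G) > \alpha \cdot \tfrac{2}{3}|V| \;\Longrightarrow\;
  A(G) > \alpha \cdot \tfrac{2}{3}|V|.
\]
% Comparing A(G) against the threshold alpha*(2/3)|V| therefore decides
% satisfiability of phi in polynomial time, so no such A exists unless P = NP.
```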
Relations and Equivalences Between Circuit Lower Bounds and Karp-Lipton Theorems*
Electronic Colloquium on Computational Complexity, Report No. 75 (2019). Lijie Chen (MIT), Dylan M. McKay (MIT), Cody D. Murray†, R. Ryan Williams (MIT).

Abstract: A frontier open problem in circuit complexity is to prove P^NP ⊄ SIZE[n^k] for all k; this is a necessary intermediate step towards NP ⊄ P/poly. Previously, for several classes containing P^NP, including NP^NP, ZPP^NP, and S_2P, such lower bounds have been proved via Karp-Lipton-style theorems: to prove C ⊄ SIZE[n^k] for all k, we show that C ⊂ P/poly implies a "collapse" D = C for some larger class D, where we already know D ⊄ SIZE[n^k] for all k.

It seems obvious that one could take a different approach to prove circuit lower bounds for P^NP that does not require proving any Karp-Lipton-style theorems along the way. We show this intuition is wrong: (weak) Karp-Lipton-style theorems for P^NP are equivalent to fixed-polynomial size circuit lower bounds for P^NP. That is, P^NP ⊄ SIZE[n^k] for all k if and only if (NP ⊂ P/poly implies PH ⊂ i.o.-P^NP/n).

Next, we present new consequences of the assumption NP ⊂ P/poly, towards proving similar results for NP circuit lower bounds. We show that under the assumption, fixed-polynomial circuit lower bounds for NP, nondeterministic polynomial-time derandomizations, and various fixed-polynomial time simulations of NP are all equivalent. Applying this equivalence, we show that circuit lower bounds for NP imply better Karp-Lipton collapses.
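For readers unfamiliar with the schema described in the abstract, the classical instance is the following case analysis (a sketch added for this listing, not taken from the report), which proves Σ_2^p ⊄ SIZE[n^k] for every fixed k by combining a counting argument with the Karp-Lipton collapse:

```latex
% Claim (classical): for every fixed k, \Sigma_2^p is not contained in SIZE[n^k].
%
% Case 1: NP \not\subset P/poly.  Since SIZE[n^k] \subseteq P/poly and
%         NP \subseteq \Sigma_2^p, already \Sigma_2^p \not\subset SIZE[n^k].
% Case 2: NP \subset P/poly.  The Karp-Lipton theorem collapses the polynomial
%         hierarchy, and PH \not\subset SIZE[n^k] holds by a direct counting
%         argument, so
\[
  \mathrm{PH} = \Sigma_2^p
  \quad\Longrightarrow\quad
  \Sigma_2^p \;=\; \mathrm{PH} \;\not\subset\; \mathrm{SIZE}[n^k].
\]
% In the abstract's notation: C = \Sigma_2^p and D = PH.
```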
Chicago Journal of Theoretical Computer Science the MIT Press
Volume 1997, Article 1. 12 March 1997. ISSN 1073-0486. MIT Press Journals, 55 Hayward St., Cambridge, MA 02142 USA; (617) 253-2889; [email protected], [email protected]. Published one article at a time in LaTeX source form on the Internet. Pagination varies from copy to copy. For more information and other articles see:
• http://www-mitpress.mit.edu/jrnls-catalog/chicago.html
• http://www.cs.uchicago.edu/publications/cjtcs/
• ftp://mitpress.mit.edu/pub/CJTCS
• ftp://cs.uchicago.edu/pub/publications/cjtcs

Feige and Kilian, Limited vs. Polynomial Nondeterminism (Info).

The Chicago Journal of Theoretical Computer Science is abstracted or indexed in Research Alert®, SciSearch®, Current Contents®/Engineering Computing & Technology, and CompuMath Citation Index®. © 1997 The Massachusetts Institute of Technology. Subscribers are licensed to use journal articles in a variety of ways, limited only as required to insure fair attribution to authors and the journal, and to prohibit use in a competing commercial product. See the journal's World Wide Web site for further details. Address inquiries to the Subsidiary Rights Manager, MIT Press Journals; (617) 253-2864; [email protected]. The Chicago Journal of Theoretical Computer Science is a peer-reviewed scholarly journal in theoretical computer science. The journal is committed to providing a forum for significant results on theoretical aspects of all topics in computer science. Editor in chief: Janos Simon. Consulting …