List Decoding of Direct Sum Codes

Vedat Levi Alev∗   Fernando Granha Jeronimo†   Dylan Quintana‡   Shashank Srivastava§   Madhur Tulsiani¶

∗ Supported by NSERC Discovery Grant 2950-120715, NSERC Accelerator Supplement 2950-120719, and partially supported by NSF awards CCF-1254044 and CCF-1718820. University of Waterloo. [email protected].
† Supported in part by NSF grants CCF-1254044 and CCF-1816372. University of Chicago. [email protected].
‡ University of Chicago. [email protected]
§ TTIC. [email protected]
¶ Supported by NSF grants CCF-1254044 and CCF-1816372. TTIC. [email protected]

Copyright © 2020 by SIAM. Unauthorized reproduction of this article is prohibited.

Abstract

We consider families of codes obtained by "lifting" a base code C through operations such as k-XOR applied to "local views" of codewords of C, according to a suitable k-uniform hypergraph. The k-XOR operation yields the direct sum encoding used in works of [Ta-Shma, STOC 2017] and [Dinur and Kaufman, FOCS 2017].

We give a general framework for list decoding such lifted codes, as long as the base code admits a unique decoding algorithm, and the hypergraph used for lifting satisfies certain expansion properties. We show that these properties are indeed satisfied by the collection of length k walks on a sufficiently strong expanding graph, and by hypergraphs corresponding to high-dimensional expanders. Instantiating our framework, we obtain list decoding algorithms for direct sum liftings corresponding to the above hypergraph families. Using known connections between direct sum and direct product, we also recover (and strengthen) the recent results of Dinur et al. [SODA 2019] on list decoding for direct product liftings.

Our framework relies on relaxations given by the Sum-of-Squares (SOS) SDP hierarchy for solving various constraint satisfaction problems (CSPs). We view the problem of recovering the closest codeword to a given (possibly corrupted) word as finding the optimal solution to an instance of a CSP. Constraints in the instance correspond to edges of the lifting hypergraph, and the solutions are restricted to lie in the base code C. We show that recent algorithms for (approximately) solving CSPs on certain expanding hypergraphs by some of the authors also yield a decoding algorithm for such lifted codes.

We extend the framework to list decoding by requiring the SOS solution to minimize a convex proxy for negative entropy. We show that this ensures a covering property for the SOS solution, and the "condition and round" approach used in several SOS algorithms can then be used to recover the required list of codewords.

1 Introduction

We consider the problem of list decoding binary codes obtained by starting with a binary base code C and amplifying its distance by "lifting" C to a new code C′ using an expanding or pseudorandom structure. Examples of such constructions include direct products, where one "lifts" (say) C ⊆ F_2^n to C′ ⊆ (F_2^k)^{n^k} with each position in y ∈ C′ being a k-tuple of bits from k positions in z ∈ C. Another example is direct sum codes, where C′ ⊆ F_2^{n^k} and each position in y is the parity of a k-tuple of bits in z ∈ C. Of course, for many applications, it is interesting to consider a small "pseudorandom" set of k-tuples, instead of considering the complete set of size n^k.
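To make the two liftings concrete, here is a small Python sketch (our own illustration; the function names and toy parameters are not from the paper) that computes the direct product and direct sum encodings of a single base codeword z, given an explicit collection of k-tuples of positions.

```python
from itertools import product

def direct_product_lift(z, tuples):
    """Direct product: each position of the lifted word is the k-tuple of
    bits of z indexed by one k-tuple of positions."""
    return [tuple(z[i] for i in t) for t in tuples]

def direct_sum_lift(z, tuples):
    """Direct sum (k-XOR): each position of the lifted word is the parity
    of the bits of z indexed by one k-tuple of positions."""
    return [sum(z[i] for i in t) % 2 for t in tuples]

# Toy example: base word of length n = 4, complete set of k = 2 tuples (n^k of them).
z = [1, 0, 1, 1]
all_tuples = list(product(range(len(z)), repeat=2))
print(direct_product_lift(z, all_tuples)[:4])   # [(1, 1), (1, 0), (1, 1), (1, 1)]
print(direct_sum_lift(z, all_tuples)[:4])       # [0, 1, 0, 0]
```

In the constructions studied here, the complete set of n^k tuples is replaced by a much sparser pseudorandom collection of k-tuples, which is exactly what makes efficient decoding nontrivial.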
This kind of distance amplification is well known in coding theory [ABN+92, IW97, GI01, TS17], and it can draw on the vast repertoire of random and pseudorandom expanding objects [HLW06, Lub18]. Such constructions are also known to have several applications to the theory of Probabilistically Checkable Proofs (PCPs) [IKW09, DS14, DDG+15, Cha16, Aro02]. However, despite having several useful properties, it might not always be clear how to decode the codes resulting from such constructions, especially when they are built from sparse pseudorandom structures. An important example of this phenomenon is Ta-Shma's explicit construction of binary codes of arbitrarily large distance near the (non-constructive) Gilbert–Varshamov bound [TS17]. Although the construction is explicit, efficient decoding is not known. Going beyond unique-decoding algorithms, it is also useful to have efficient list-decoding algorithms for complexity-theoretic applications [Sud00, Gur01, STV01, Tre04].

The question of list decoding such pseudorandom constructions of direct-product codes was considered by Dinur et al. [DHK+19], extending a unique-decoding result of Alon et al. [ABN+92]. While Alon et al. proved that the code is uniquely decodable when the lifting hypergraph (collection of k-tuples) is a good "sampler", Dinur et al. showed that when the hypergraph has additional structure (which they called being a "double sampler") then the code is also list decodable. They also posed the question of understanding structural properties of the hypergraph that might yield even unique decoding algorithms for the direct sum based liftings.

We develop a generic framework to understand properties of the hypergraphs under which the lifted code C′ admits efficient list decoding algorithms, assuming only efficient unique decoding algorithms for the base code C. Formally, let X be a downward-closed hypergraph (simplicial complex) defined by taking the downward closure of a k-uniform hypergraph, and let g : F_2^k → F_2 be any boolean function. X(i) denotes the collection of sets of size i in X, and X(≤ d) the collection of sets of size at most d. We consider the lift C′ = lift_X^g(C), where C ⊆ F_2^{X(1)} and C′ ⊆ F_2^{X(k)}, and each bit of y ∈ C′ is obtained by applying the function g to the corresponding k bits of z ∈ C. We study properties of g and X under which this lifting admits an efficient list decoding algorithm.

We consider two properties of this lifting, robustness and tensoriality, which will be formally defined later. We will show that these properties are sufficient to yield decoding algorithms. The first property (robustness) essentially requires that for any two words in F_2^{X(1)} at a moderate distance, the lifting amplifies the distance between them. The second property is of a more technical nature and is inspired by the Sum-of-Squares (SOS) SDP hierarchy used for our decoding algorithms; it is implied by some simpler combinatorial properties. Roughly speaking, this combinatorial property requires that the graph on (say) X(k/2) defined by connecting s, t ∈ X(k/2) if s ∩ t = ∅ and s ∪ t ∈ X(k) is a sufficiently good expander (and similarly for graphs on X(k/4), X(k/8), and so on). This property requires that the k-tuples can be (recursively) split into disjoint pieces such that at each step the graph obtained between the pairs of pieces is a good expander. We refer to this property as splittability.
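As a purely illustrative rendering of this condition, the sketch below builds the graph on X(k/2) described above and measures its expansion by the second-largest absolute eigenvalue of its random-walk matrix, one standard proxy for spectral expansion. The helper names and the brute-force construction are our own and are only meant for small examples, not for the complexes in the paper.

```python
import numpy as np
from itertools import combinations

def splitting_graph(X_half, X_k):
    """Graph on X(k/2): connect s, t when they are disjoint and s ∪ t is a hyperedge in X(k)."""
    X_k = {frozenset(e) for e in X_k}
    nodes = [frozenset(s) for s in X_half]
    A = np.zeros((len(nodes), len(nodes)))
    for i, s in enumerate(nodes):
        for j, t in enumerate(nodes):
            if i != j and not (s & t) and (s | t) in X_k:
                A[i, j] = 1
    return A

def second_eigenvalue(A):
    """Second-largest absolute eigenvalue of the random-walk matrix; small means good expansion."""
    deg = A.sum(axis=1)
    deg[deg == 0] = 1                      # avoid division by zero on isolated vertices
    W = A / deg[:, None]
    eigs = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return eigs[1] if len(eigs) > 1 else 0.0

# Toy check with k = 4: X(2) = all pairs, X(4) = all 4-subsets of {0,...,5}.
X2 = list(combinations(range(6), 2))
X4 = list(combinations(range(6), 4))
print(second_eigenvalue(splitting_graph(X2, X4)))   # ≈ 0.5 for this complete toy complex
```

The splittability property asks for such expansion not only at the top level but recursively, for the analogous graphs on X(k/4), X(k/8), and so on.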
Expanding Structures. We instantiate the above framework with two specific structures: the collection of k-sized hyperedges of a high-dimensional expander (HDX) and the collection of length k walks on an expander graph. HDXs are downward-closed hypergraphs satisfying certain expansion properties, which we quantify using Dinur and Kaufman's notion of a γ-HDX [DK17]. For the collection of length k walks, the notion of splitting can easily be related to that of the underlying expander graph. In both cases, we take the function g to be k-XOR, which corresponds to the direct sum lifting. We also obtain results for direct product codes via a simple (and standard) reduction to the direct sum case.

Our Results. We now provide a quantitative version of our main result. For this, we split the main result into two cases (due to their difference in parameters): HDXs and length k walks on expander graphs. We start with the former expanding object.

THEOREM 1.1. (HDX (INFORMAL)) Let β₀ < 1/2 be a constant and β ∈ (0, β₀). Suppose X(≤ d) is a γ-HDX on n vertices with γ ≤ (log(1/β))^{−O(log(1/β))} and d = Ω((log(1/β))²/β²).
For every linear code C₁ ⊂ F_2^n with relative distance ≥ 1/2 − β₀, there exists a direct sum lifting C_k ⊂ F_2^{X(k)} with k = O_{β₀}(log(1/β)) and relative distance ≥ 1/2 − β satisfying the following:

- [Efficient List Decoding] If ỹ is (1/2 − β)-close to C_k, then we can compute the list of all the codewords of C_k that are (1/2 − β)-close to ỹ in time n^{β^{−O(1)}} · f(n), where f(n) is the running time of a unique decoding algorithm for C₁.

- [Rate] The rate r_k of C_k is r_k = r₁ · |X(1)| / |X(k)|, where r₁ is the rate of C₁.

A consequence of this result is a method of decoding the direct product lifting on a HDX via a reduction to the direct sum case.

COROLLARY 1.1. (HDX (INFORMAL)) Let ε₀ < 1/2 be a constant and ε > 0. Suppose X(≤ d) is a γ-HDX on n vertices with γ ≤ (log(1/ε))^{−O(log(1/ε))} and d = Ω((log(1/ε))²/ε²).
For every linear code C₁ ⊂ F_2^n with relative distance ≥ 1/2 − ε₀, there exists a direct product encoding C_ℓ ⊂ (F_2^ℓ)^{X(ℓ)} with ℓ = O(log(1/ε)) that can be efficiently list decoded up to distance 1 − ε.

REMARK 1.1. List decoding the direct product lifting was first established by Dinur et al. in [DHK+19] using an expanding object introduced by them, namely, double samplers.
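As a sanity check on the rate bound in Theorem 1.1, note that the lifting changes only the block length, not the number of codewords. A one-line derivation (our own restatement, assuming the lifting is injective on C₁, which the distance bound guarantees):

```latex
% Rate of the lifted code C_k in terms of the rate r_1 of the base code C_1.
|C_k| = |C_1| = 2^{r_1 |X(1)|}
\qquad\Longrightarrow\qquad
r_k \;=\; \frac{\log_2 |C_k|}{|X(k)|} \;=\; \frac{r_1\,|X(1)|}{|X(k)|}.
```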