The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into ℓ1 [Extended Abstract]


The Unique Games Conjecture, Integrality Gap for Cut Problems and Embeddability of Negative Type Metrics into ℓ1 [Extended Abstract]

Subhash A. Khot, College of Computing, Georgia Tech, Atlanta GA 30332, [email protected]
Nisheeth K. Vishnoi, IBM India Research Lab, Block-1 IIT Delhi, New Delhi 110016, [email protected]

Abstract

In this paper we disprove the following conjecture due to Goemans [16] and Linial [24] (also see [5, 26]): "Every negative type metric embeds into ℓ1 with constant distortion." We show that for every δ > 0, and for large enough n, there is an n-point negative type metric which requires distortion at least (log log n)^{1/6−δ} to embed into ℓ1.

Surprisingly, our construction is inspired by the Unique Games Conjecture (UGC) of Khot [19], establishing a previously unsuspected connection between PCPs and the theory of metric embeddings. We first prove that the UGC implies super-constant hardness results for (non-uniform) SPARSEST CUT and MINIMUM UNCUT. It is already known that the UGC also implies an optimal hardness result for MAXIMUM CUT [20].

Though these hardness results depend on the UGC, the integrality gap instances rely "only" on the PCP reductions for the respective problems. Towards this, we first construct an integrality gap instance for a natural SDP relaxation of UNIQUE GAMES. Then we "simulate" the PCP reduction and "translate" the integrality gap instance of UNIQUE GAMES into integrality gap instances for the respective cut problems! This enables us to prove a (log log n)^{1/6−δ} integrality gap for (non-uniform) SPARSEST CUT and MINIMUM UNCUT, and an optimal integrality gap for MAXIMUM CUT. All our SDP solutions satisfy the so-called "triangle inequality" constraints. This also shows, for the first time, that the triangle inequality constraints do not add any power to the Goemans-Williamson SDP relaxation of MAXIMUM CUT.

The integrality gap for SPARSEST CUT immediately implies a lower bound for embedding negative type metrics into ℓ1. It also disproves the non-uniform version of Arora, Rao and Vazirani's conjecture [5], asserting that the integrality gap of the SPARSEST CUT SDP, with the triangle inequality constraints, is bounded from above by a constant.

1. Introduction

In recent years, the theory of metric embeddings has played an increasing role in algorithm design. The best approximation algorithms for several NP-hard problems rely on techniques (and theorems) used to embed one metric space into another with low distortion.

Bourgain [7] showed that every n-point metric embeds into ℓ1 (in fact into ℓ2) with distortion O(log n). Independently, Aumann and Rabani [6] and Linial, London and Rabinovich [25] gave a striking application of Bourgain's theorem: an O(log n) approximation algorithm for SPARSEST CUT. The approximation ratio is exactly the distortion incurred in Bourgain's theorem. This gave an alternate approach to the seminal work of Leighton and Rao [23], who obtained an O(log n) approximation algorithm for SPARSEST CUT via an LP relaxation based on multicommodity flows. It is well known that an f(n)-factor algorithm for SPARSEST CUT can be used iteratively to design an O(f(n))-factor algorithm for BALANCED SEPARATOR: given a graph that has a (1/2, 1/2)-partition cutting an α fraction of the edges, the algorithm produces a (1/3, 2/3)-partition that cuts at most an O(f(n)α) fraction of the edges. Such partitioning algorithms are very useful as subroutines in designing graph-theoretic algorithms via the divide-and-conquer paradigm.

The results of [6, 25] are based on the metric LP relaxation of SPARSEST CUT. Given an instance G(V, E) of SPARSEST CUT, let dG be the n-point metric obtained as a solution to this LP. The metric dG is then embedded into ℓ1 via Bourgain's theorem. Since ℓ1 metrics are non-negative linear combinations of cut metrics, an embedding into ℓ1 essentially gives the desired sparse cut (up to an O(log n) approximation factor). Subsequent to this result, it was realized that one could write an SDP relaxation of SPARSEST CUT and enforce the additional condition that the metric dG belong to a special class of metrics, the negative type metrics (denoted ℓ2²). Clearly, if ℓ2² embeds into ℓ1 with distortion g(n), then one would get a g(n) approximation to SPARSEST CUT.¹

The results of [6, 25] led to the conjecture that ℓ2² embeds into ℓ1 with distortion C_neg, for some absolute constant C_neg. This conjecture has been attributed to Goemans [16] and Linial [24]; see [5, 26]. This conjecture, which we will henceforth refer to as the (ℓ2², ℓ1, O(1))-Conjecture, if true, would have had tremendous algorithmic applications (apart from being an important mathematical result). Several problems, specifically cut problems (see [11]), can be formulated as optimization problems over the class of ℓ1 metrics, and optimization over ℓ1 is an NP-hard problem in general. However, one can optimize over ℓ2² metrics in polynomial time via SDPs (and ℓ1 ⊆ ℓ2²). Hence, if ℓ2² were embeddable into ℓ1 with constant distortion, one would get a computationally efficient approximation to ℓ1 metrics.

However, no better embedding of ℓ2² into ℓ1, other than Bourgain's O(log n) embedding (which works for all metrics), was known until recently. A breakthrough result of Arora, Rao and Vazirani (ARV) [5] gave an O(√log n) approximation to (uniform) SPARSEST CUT by showing that the integrality gap of the SDP relaxation is O(√log n) (see also [28] for an alternate perspective on ARV). Subsequently, ARV techniques were used by Chawla, Gupta and Räcke [9] to give an O(log^{3/4} n)-distortion embedding of ℓ2² metrics into ℓ2, and hence into ℓ1. This result was further improved to O(√(log n) · log log n) by Arora, Lee, and Naor [3]. The latter paper implies, in particular, that every n-point ℓ1 metric embeds into ℓ2 with distortion O(√(log n) · log log n), almost matching the decades-old Ω(√log n) lower bound due to Enflo [12]. Techniques from ARV have also been applied to obtain an O(√log n) approximation to MINIMUM UNCUT and related problems [1] and to VERTEX SEPARATOR [13], and to obtain a 2 − O(1/√log n) approximation to VERTEX COVER [18].

It was conjectured in the ARV paper that the integrality gap of the SDP relaxation of SPARSEST CUT is bounded from above by an absolute constant (they make this conjecture only for the uniform version; the (ℓ2², ℓ1, O(1))-Conjecture implies it also for the non-uniform version). Thus, if the (ℓ2², ℓ1, O(1))-Conjecture and/or the ARV conjecture were true, one would potentially get a constant-factor approximation to a host of problems, and perhaps an algorithm for VERTEX COVER with an approximation factor better than 2! Clearly, it is an important open problem to prove or disprove the (ℓ2², ℓ1, O(1))-Conjecture and/or the ARV conjecture.

The main result in this paper is a disproval of the (ℓ2², ℓ1, O(1))-Conjecture and a disproval of the non-uniform version of the ARV conjecture; see Conjecture A.15. The disprovals follow from the construction of a super-constant integrality gap for the non-uniform version of BALANCED SEPARATOR (which implies the same gap for the non-uniform version of SPARSEST CUT). We also obtain integrality gap instances for MAXIMUM CUT and MINIMUM UNCUT. In the following sections, we describe our results in detail and present an overview of our ℓ2²-versus-ℓ1 lower bound. Due to lack of space, the only technical details we are able to provide in this version of the paper are the basic setup, which relates cuts and metrics, in Appendix A, and the overall approach for the disproval of the (ℓ2², ℓ1, O(1))-Conjecture in Appendix B. The full version of this paper will have all the details.

2. Our Results

2.1. The Disproval of the (ℓ2², ℓ1, O(1))-Conjecture

We prove the following theorem, which follows from the integrality gap construction for non-uniform BALANCED SEPARATOR. See Section A for definitions and basic facts.

Theorem 2.1. For every δ > 0 and for all sufficiently large n, there is an n-point ℓ2² metric which cannot be embedded into ℓ1 with distortion less than (log log n)^{1/6−δ}.

Remark 2.2. One of the crucial ingredients for obtaining the lower bound of (log log n)^{1/6−δ} in Theorems 2.1 and 2.3 is Bourgain's Junta Theorem [8]. A recent improvement of this theorem due to Mossel et al. [27] improves both of our lower bounds to (log log n)^{1/4−δ}.

2.2. Integrality Gap Instances for Cut Problems

SPARSEST CUT and BALANCED SEPARATOR (non-uniform versions), as well as MAXIMUM CUT and MINIMUM UNCUT, are defined in Section A.4. Natural SDP relaxations for these problems are also described there. All the SDPs include the so-called triangle inequality constraints: for every triple of vectors u, v, w in the SDP solution, ‖u − v‖² + ‖v − w‖² ≥ ‖u − w‖². Note that these constraints are always satisfied by the integral solutions, i.e., +1/−1-valued solutions. We prove the following two theorems:

Theorem 2.3. SPARSEST CUT, BALANCED SEPARATOR (non-uniform versions of both) and MINIMUM UNCUT have an integrality gap of at least (log log n)^{1/6−δ}, where

¹ Algorithms based on metric embeddings (typically) work for the non-uniform version of SPARSEST CUT, which is more general.
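The step above that extracts a sparse cut from an ℓ1 embedding relies on the classical fact that every ℓ1 metric is a non-negative linear combination of cut metrics. The sketch below (our own illustration, not code from the paper; all function names are ours) checks this concretely for the simplest case, an n-point metric on the real line, where the cuts are the threshold cuts between consecutive sorted points, weighted by the gaps.

```python
def cut_metric(points, S):
    """delta_S(x, y) = 1 if exactly one of x, y lies in S, else 0."""
    return {(x, y): int((x in S) != (y in S)) for x in points for y in points}

def l1_line_decomposition(points):
    """Return (weight, cut_metric) pairs whose weighted sum recovers |x - y|."""
    vals = sorted(points)
    pieces = []
    for a, b in zip(vals, vals[1:]):
        S = {p for p in points if p <= a}   # threshold cut just above a
        pieces.append((b - a, cut_metric(points, S)))  # weight = gap, always >= 0
    return pieces

points = [0, 1, 3, 7]
pieces = l1_line_decomposition(points)
# Every pairwise distance telescopes into the weighted sum of threshold cuts.
for x in points:
    for y in points:
        assert sum(w * c[(x, y)] for w, c in pieces) == abs(x - y)
print("line metric on", points, "= non-negative sum of", len(pieces), "cut metrics")
```

For a general ℓ1 metric the same decomposition is applied coordinate-wise, which is why optimizing over cuts and optimizing over ℓ1 metrics are equivalent.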
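The claim in Section 2.2 that the triangle inequality constraints ‖u − v‖² + ‖v − w‖² ≥ ‖u − w‖² are automatically satisfied by every integral (+1/−1-valued) solution can be verified exhaustively in low dimension; the check is coordinate-wise, so the dimension chosen below is immaterial. This is our own sanity-check sketch, not code from the paper.

```python
from itertools import product

def sqdist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

n = 3  # dimension; the inequality holds coordinate-wise, so any n behaves the same
vectors = list(product([-1, 1], repeat=n))

# In each coordinate the squared difference is 0 or 4; if u and w differ there,
# then v must differ from at least one of them, so the left side is >= 4.
for u in vectors:
    for v in vectors:
        for w in vectors:
            assert sqdist(u, v) + sqdist(v, w) >= sqdist(u, w)
print("triangle inequality holds for all", len(vectors) ** 3, "integral triples")
```

The content of the paper's integrality gap results is that even vector solutions obeying these extra constraints (i.e., ℓ2² solutions) can be far from any integral solution.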