VIII. Approximation Algorithms: MAX-CUT Problem (Outlook) Thomas Sauerwald


Easter 2017

Outline: Simple Algorithms for MAX-CUT · A Solution based on Semidefinite Programming · Summary

Simple Algorithms for MAX-CUT

MAX-CUT Problem
Given: an undirected graph G = (V, E).
Goal: find a subset S ⊆ V such that |E(S, V \ S)| is maximized.

Weighted MAX-CUT: every edge e ∈ E has a non-negative weight w(e); maximize the total weight of the edges crossing the cut, i.e., maximize

    w(S) := Σ_{{u,v} ∈ E(S, V \ S)} w({u,v}).

[Figure: an example weighted graph on the vertices a–h; the highlighted cut S = {a, b, g} has weight w(S) = 18.]

Applications: cluster analysis, VLSI design.
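To make the definition concrete, here is a minimal Python sketch that evaluates w(S) for a given vertex subset. The graph below is a small hypothetical example (the exact edge weights of the slide's figure are not recoverable from the extraction), so the printed value is illustrative only.

```python
def cut_weight(edges, S):
    """Weight of the cut (S, V \\ S): sum of w({u, v}) over edges with exactly
    one endpoint in S. `edges` maps frozenset({u, v}) -> non-negative weight."""
    S = set(S)
    return sum(w for e, w in edges.items() if len(e & S) == 1)

# Hypothetical weighted graph (NOT the figure from the slides).
edges = {
    frozenset({"a", "b"}): 2, frozenset({"a", "e"}): 5,
    frozenset({"b", "c"}): 1, frozenset({"c", "e"}): 4,
    frozenset({"e", "g"}): 2, frozenset({"g", "h"}): 1,
    frozenset({"d", "g"}): 3,
}

print(cut_weight(edges, {"a", "b", "g"}))  # weight of this particular cut
```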
Random Sampling (Ex 35.4-3)
Suppose that for each vertex v, we randomly and independently place v in S with probability 1/2 and in V \ S with probability 1/2. Then this algorithm is a randomized 2-approximation algorithm.

Proof: We express the expected weight of the random cut (S, V \ S) as

    E[ w(S, V \ S) ] = E[ Σ_{{u,v} ∈ E(S, V \ S)} w({u,v}) ]
                     = Σ_{{u,v} ∈ E} Pr[ (u ∈ S ∧ v ∈ V \ S) ∪ (u ∈ V \ S ∧ v ∈ S) ] · w({u,v})
                     = Σ_{{u,v} ∈ E} (1/4 + 1/4) · w({u,v})
                     = (1/2) Σ_{{u,v} ∈ E} w({u,v}) ≥ (1/2) w*,

since the two endpoints of an edge land on opposite sides with probability exactly 1/2, and the total edge weight is an upper bound on the weight w* of an optimal cut. We could employ the same derandomisation used for MAX-3-CNF.
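A minimal sketch of the random-sampling algorithm above, reusing the hypothetical `cut_weight` and `edges` from the previous sketch. A single random cut already has expected weight at least half of the total edge weight, hence at least w*/2; the best-of-several-trials loop is only a convenience, not part of the analysis.

```python
import random

def random_cut(vertices):
    # Place each vertex in S independently with probability 1/2.
    return {v for v in vertices if random.random() < 0.5}

def random_sampling_max_cut(edges, trials=100):
    """Return the best of `trials` random cuts (a single trial already gives
    the expected 2-approximation guarantee)."""
    vertices = {v for e in edges for v in e}
    best_S, best_w = set(), 0
    for _ in range(trials):
        S = random_cut(vertices)
        w = cut_weight(edges, S)          # from the previous sketch
        if w > best_w:
            best_S, best_w = S, w
    return best_S, best_w

print(random_sampling_max_cut(edges))
```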
Recommended publications
  • Lecture 20 — March 20, 2013 (The Maximum Cut Problem; A Spectral Approximation Algorithm)
UBC CPSC 536N: Sparse Approximations, Winter 2013. Lecture 20, March 20, 2013. Prof. Nick Harvey. Scribe: Alexandre Fréchette. We study the Maximum Cut Problem with approximation in mind, and naturally we provide a spectral graph theoretic approach.

1 The Maximum Cut Problem. Definition 1.1. Given an (undirected) graph G = (V, E), the maximum cut δ(U) for U ⊆ V is the cut with maximal value |δ(U)|. The Maximum Cut Problem consists of finding a maximal cut. We let MaxCut(G) = max{|δ(U)| : U ⊆ V} be the value of the maximum cut in G, and MaxCut′(G) = MaxCut(G)/|E| be the normalized version (note that both normalized and unnormalized maximum cut values are induced by the same set of nodes, so we can interchange them in our search for an actual maximum cut). The Maximum Cut Problem is NP-hard. For a long time, the randomized algorithm consisting of uniformly choosing a cut was state-of-the-art with its 1/2-approximation factor. The simplicity of the algorithm and our inability to find a better solution were unsettling. Linear programming unfortunately didn't seem to help. However, a (novel) technique called semi-definite programming provided a 0.878-approximation algorithm [Goemans-Williamson '94], which is optimal assuming the Unique Games Conjecture.

2 A Spectral Approximation Algorithm. Today, we study a result of [Trevisan '09] giving a spectral approach to the Maximum Cut Problem with a 0.531-approximation factor. We will present a slightly simplified result with a 0.5292-approximation factor.
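For very small graphs, Definition 1.1 can be checked directly; here is a brute-force sketch (my own illustration, exponential time) that computes MaxCut(G) and the normalized MaxCut′(G) = MaxCut(G)/|E|.

```python
from itertools import combinations

def max_cut_brute_force(n, edges):
    """edges: list of pairs (u, v) with 0 <= u, v < n.
    Returns (MaxCut(G), MaxCut'(G) = MaxCut(G) / |E|)."""
    best = 0
    for k in range(n + 1):
        for U in combinations(range(n), k):
            U = set(U)
            best = max(best, sum(1 for u, v in edges if (u in U) != (v in U)))
    return best, best / len(edges)

# A 4-cycle is bipartite, so the maximum cut contains all 4 edges.
print(max_cut_brute_force(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # (4, 1.0)
```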
  • Tight Integrality Gaps for Lovasz-Schrijver LP Relaxations of Vertex Cover and Max Cut
Tight Integrality Gaps for Lovasz-Schrijver LP Relaxations of Vertex Cover and Max Cut. Grant Schoenebeck, Luca Trevisan, Madhur Tulsiani.

Abstract. We study linear programming relaxations of Vertex Cover and Max Cut arising from repeated applications of the "lift-and-project" method of Lovasz and Schrijver starting from the standard linear programming relaxation. For Vertex Cover, Arora, Bollobas, Lovasz and Tourlakis prove that the integrality gap remains at least 2 − ε after Ω_ε(log n) rounds, where n is the number of vertices, and Tourlakis proves that the integrality gap remains at least 1.5 − ε after Ω((log n)²) rounds. Fernandez de la Vega and Kenyon prove that the integrality gap of Max Cut is at most 1/2 + ε after any constant number of rounds. (Their result also applies to the more powerful Sherali-Adams method.) We prove that the integrality gap of Vertex Cover remains at least 2 − ε after Ω_ε(n) rounds, and that the integrality gap of Max Cut remains at most 1/2 + ε after Ω_ε(n) rounds.

1 Introduction. Lovasz and Schrijver [LS91] describe a method, referred to as LS, to tighten a linear programming relaxation of a 0/1 integer program. The method adds auxiliary variables and valid inequalities, and it can be applied several times sequentially, yielding a sequence (a "hierarchy") of tighter and tighter relaxations. The method is interesting because it produces relaxations that are both tightly constrained and efficiently solvable. For a linear programming relaxation K, denote by N(K) the relaxation obtained by the application of the LS method, and by N^k(K) the relaxation obtained by applying the LS method k times.
  • Optimal Inapproximability Results for MAX-CUT and Other 2-Variable CSPs?
Electronic Colloquium on Computational Complexity, Report No. 101 (2005). Optimal Inapproximability Results for MAX-CUT and Other 2-Variable CSPs? Subhash Khot (College of Computing, Georgia Tech), Guy Kindler (Theory Group, Microsoft Research), Elchanan Mossel (Department of Statistics, U.C. Berkeley), Ryan O'Donnell (Theory Group, Microsoft Research). September 19, 2005.

Abstract. In this paper we show a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of α_GW + ε, for all ε > 0; here α_GW ≈ .878567 denotes the approximation ratio achieved by the Goemans-Williamson algorithm [25]. This implies that if the Unique Games Conjecture of Khot [36] holds then the Goemans-Williamson approximation algorithm is optimal. Our result indicates that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem. Our reduction relies on a theorem we call Majority Is Stablest. This was introduced as a conjecture in the original version of this paper, and was subsequently confirmed in [42]. A stronger version of this conjecture called Plurality Is Stablest is still open, although [42] contains a proof of an asymptotic version of it. Our techniques extend to several other two-variable constraint satisfaction problems. In particular, subject to the Unique Games Conjecture, we show tight or nearly tight hardness results for MAX-2SAT, MAX-q-CUT, and MAX-2LIN(q). For MAX-2SAT we show approximation hardness up to a factor of roughly .943. This nearly matches the .940 approximation algorithm of Lewin, Livnat, and Zwick [40].
  • Lecture 16: Constraint Satisfaction Problems 1 Max-Cut SDP
A Theorist's Toolkit (CMU 18-859T, Fall 2013). Lecture 16: Constraint Satisfaction Problems, 10/30/2013. Lecturer: Ryan O'Donnell. Scribe: Neal Barcelo.

1 Max-Cut SDP Approximation. Recall the Max-Cut problem from last lecture. We are given a graph G = (V, E) and our goal is to find a function F : V → {−1, 1} that maximizes Pr_{(i,j)∼E}[F(v_i) ≠ F(v_j)]. As we saw before, this is NP-hard to do optimally, but we had the following SDP relaxation:

    max avg_{(i,j)∈E} { 1/2 − 1/2 · y_ij }
    s.t. y_vv = 1 for all v
         Y = (y_ij) is symmetric and PSD

Note that the second condition is a relaxation of the constraint "∃ x_i s.t. y_ij = x_i x_j", which is the constraint needed to exactly model the max-cut problem. Further, recall that a matrix being Positive Semidefinite (PSD) is equivalent to the following characterizations.

1. Y ∈ R^{n×n} is PSD.
2. There exists a U ∈ R^{m×n} such that Y = U^T U.
3. There exist joint random variables X_1, ..., X_n such that y_uv = E[X_u X_v].
4. There exist vectors U⃗_1, ..., U⃗_n such that y_vw = ⟨U⃗_v, U⃗_w⟩.

Using the last characterization, we can rewrite our SDP relaxation as follows:

    max avg_{(i,j)∈E} { 1/2 − 1/2 · ⟨v⃗_i, v⃗_j⟩ }
    s.t. ‖v⃗_i‖² = 1, with a vector v⃗_i for each v_i ∈ V

First let's make sure this is actually a relaxation of our original problem. To see this, consider an optimal cut F* : V → {1, −1}. Then, if we let v⃗_i = (F*(v_i), 0, ..., 0), all of the constraints are satisfied and the objective value remains the same.
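The vector formulation above can be handed to an off-the-shelf SDP solver. The sketch below is my own illustration (it assumes numpy and cvxpy with its bundled SCS solver are installed, and uses unit edge weights); the random-hyperplane rounding step is in the style of Goemans-Williamson and goes beyond what the excerpt itself shows.

```python
import numpy as np
import cvxpy as cp

def max_cut_sdp_round(n, edges, seed=0):
    """Solve max avg_(i,j) in E (1/2 - 1/2*y_ij) s.t. y_ii = 1, Y PSD,
    then round with a random hyperplane."""
    Y = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(sum(0.5 - 0.5 * Y[i, j] for i, j in edges) / len(edges))
    prob = cp.Problem(objective, [cp.diag(Y) == 1])
    prob.solve()

    # Recover vectors with Y ~ U^T U via an eigendecomposition.
    lam, V = np.linalg.eigh(Y.value)
    U = (V * np.sqrt(np.clip(lam, 0, None))).T   # column i is the vector for vertex i

    r = np.random.default_rng(seed).standard_normal(n)   # random hyperplane normal
    side = U.T @ r >= 0                                   # sign pattern defines the cut
    cut = sum(1 for i, j in edges if side[i] != side[j])
    return cut, prob.value   # rounded cut size, SDP value (fraction of edges)

print(max_cut_sdp_round(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```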
  • Greedy Algorithms for the Maximum Satisfiability Problem: Simple Algorithms and Inapproximability Bounds∗
SIAM J. COMPUT., Vol. 46, No. 3, pp. 1029–1061. © 2017 Society for Industrial and Applied Mathematics. GREEDY ALGORITHMS FOR THE MAXIMUM SATISFIABILITY PROBLEM: SIMPLE ALGORITHMS AND INAPPROXIMABILITY BOUNDS. Matthias Poloczek, Georg Schnitger, David P. Williamson, and Anke van Zuylen.

Abstract. We give a simple, randomized greedy algorithm for the maximum satisfiability problem (MAX SAT) that obtains a 3/4-approximation in expectation. In contrast to previously known 3/4-approximation algorithms, our algorithm does not use flows or linear programming. Hence we provide a positive answer to a question posed by Williamson in 1998 on whether such an algorithm exists. Moreover, we show that Johnson's greedy algorithm cannot guarantee a 3/4-approximation, even if the variables are processed in a random order. Thereby we partially solve a problem posed by Chen, Friesen, and Zheng in 1999. In order to explore the limitations of the greedy paradigm, we use the model of priority algorithms of Borodin, Nielsen, and Rackoff. Since our greedy algorithm works in an online scenario where the variables arrive with their set of undecided clauses, we wonder if a better approximation ratio can be obtained by further fine-tuning its random decisions. For a particular information model we show that no priority algorithm can approximate Online MAX SAT within 3/4 + ε (for any ε > 0). We further investigate the strength of deterministic greedy algorithms that may choose the variable ordering. Here we show that no adaptive priority algorithm can achieve approximation ratio 3/4. We propose two ways in which this inapproximability result can be bypassed.
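The paper's own randomized greedy is not reproduced in the abstract, so as a point of reference here is a sketch (my encoding of clauses as lists of signed integers) of the classical deterministic greedy obtained by derandomizing the uniformly random assignment via conditional expectations, a procedure commonly identified with Johnson's algorithm.

```python
def expected_sat(clauses, assignment):
    """Expected number of satisfied clauses when every unassigned variable is
    set uniformly at random. A clause is a list of non-zero ints: +v means
    variable v, -v means its negation. `assignment` maps v -> bool."""
    total = 0.0
    for clause in clauses:
        unassigned = 0
        satisfied = False
        for lit in clause:
            v = abs(lit)
            if v not in assignment:
                unassigned += 1
            elif assignment[v] == (lit > 0):
                satisfied = True
        if satisfied:
            total += 1.0
        elif unassigned > 0:
            total += 1.0 - 0.5 ** unassigned
        # else: every literal is already falsified, contributes 0
    return total

def greedy_conditional_expectation(clauses):
    variables = sorted({abs(lit) for c in clauses for lit in c})
    assignment = {}
    for v in variables:
        # Keep whichever value of v gives the larger conditional expectation.
        assignment[v] = True
        e_true = expected_sat(clauses, assignment)
        assignment[v] = False
        e_false = expected_sat(clauses, assignment)
        if e_true >= e_false:
            assignment[v] = True
    return assignment

clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]
a = greedy_conditional_expectation(clauses)
print(a, expected_sat(clauses, a))  # once all variables are set, this is the exact count
```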
  • Chicago Journal of Theoretical Computer Science, The MIT Press
Chicago Journal of Theoretical Computer Science, The MIT Press. Volume 1997, Article 2, 3 June 1997. ISSN 1073–0486. Published one article at a time in LaTeX source form on the Internet. Kann et al., Hardness of Approximating Max k-Cut (Info). The Chicago Journal of Theoretical Computer Science is a peer-reviewed scholarly journal in theoretical computer science. The journal is committed to providing a forum for significant results on theoretical aspects of all topics in computer science. Editor in chief: Janos Simon.
  • Optimal Inapproximability Results for MAX-CUT and Other 2-Variable CSPs?
Optimal Inapproximability Results for MAX-CUT and Other 2-variable CSPs? Subhash Khot† (School of Mathematics, Institute for Advanced Study, Princeton, NJ), Guy Kindler (DIMACS, Rutgers University, Piscataway, NJ), Elchanan Mossel∗ (Department of Statistics, U.C. Berkeley, Berkeley, CA), Ryan O'Donnell† (School of Mathematics, Institute for Advanced Study, Princeton, NJ). June 19, 2004. Preliminary version.

Abstract. In this paper we give evidence that it is NP-hard to approximate MAX-CUT to within a factor of α_GW + ε, for all ε > 0. Here α_GW denotes the approximation ratio achieved by the Goemans-Williamson algorithm [22], α_GW ≈ .878567. We show that the result follows from two conjectures: a) the Unique Games conjecture of Khot [33]; and, b) a very believable conjecture we call the Majority Is Stablest conjecture. Our results suggest that the naturally hard "core" of MAX-CUT is the set of instances in which the graph is embedded on a high-dimensional Euclidean sphere and the weight of an edge is given by the squared distance between the vertices it connects. The same two conjectures also imply that it is NP-hard to (β + ε)-approximate MAX-2SAT, where β ≈ .943943 is the minimum of (2 + (2/π)θ)/(3 − cos(θ)) on (π/2, π).

∗ Supported by a Miller fellowship in Computer Science and Statistics, U.C. Berkeley. † This material is based upon work supported by the National Science Foundation under agreement Nos. DMS-0111298 and CCR-0324906 respectively. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
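As a quick numerical sanity check on the constant in the last sentence of the abstract (under my reading of the garbled formula), minimizing (2 + (2/π)θ)/(3 − cos θ) over θ ∈ (π/2, π) indeed gives roughly 0.943943:

```python
import math

def f(theta):
    # The quantity whose minimum over (pi/2, pi) is the constant beta.
    return (2 + (2 / math.pi) * theta) / (3 - math.cos(theta))

# Simple grid search over the open interval (pi/2, pi).
steps = 200_000
beta = min(f(math.pi / 2 + k * (math.pi / 2) / steps) for k in range(1, steps))
print(round(beta, 6))   # expected to be ~0.943943
```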
  • Maximum Cut and Related Problems Approximating the Maximum
Proof, beliefs, and algorithms through the lens of sum-of-squares. Maximum cut and related problems.

[Figure 1: The graph of Renato Paes Leme's friends on the social network Orkut; the partition into top and bottom parts is an approximation of the maximum cut, while the partition into left and right sides is an approximation of the sparsest cut.]

In this lecture, we will discuss three fundamental NP-hard optimization problems that turn out to be intimately related to each other. The first is the problem of finding a maximum cut in a graph. This problem is among the most basic NP-hard problems. It was among the first problems shown to be NP-hard (Karp [1972]).¹ The second problem is estimating the expansion of a graph.² This problem is related to isoperimetric questions in geometry, the study of manifolds, and a famous inequality by Cheeger [1970]. The third problem is estimating mixed norms of linear operators. This problem is more abstract than the previous ones but also more versatile. It is closely related to a famous inequality by Grothendieck [1953]. How are these seemingly different problems related? First, it turns out that all three of them can be phrased in terms of optimiz-

¹ This problem also shows that small syntactic changes in the problem definition can make a big difference for its computational complexity. The problem of finding the minimum cut in a graph has a polynomial-time algorithm, whereas a polynomial-time algorithm for finding a maximum cut is unlikely because it would imply P=NP.
² In the literature, several different terms, e.g., expansion, conductance, and sparsest-cut value, are used to describe closely related parameters of graphs.
  • Streaming Algorithms for Maximum Satisfiability
Streaming Algorithms for Maximum Satisfiability. Hoa T. Vu (San Diego State University).

Abstract. We study the maximum satisfiability (MAX-SAT) problem in the streaming setting. In this problem, we are given m clauses that are disjunctions of literals drawn from n boolean variables and the objective is to find a boolean assignment to the variables that maximizes the number of satisfied clauses. While MAX-SAT is NP-Hard, it admits various polynomial-time approximations. In the streaming model, input clauses are presented in an online fashion and the goal is to obtain a non-trivial approximation while using sublinear o(mn) memory. We present randomized single-pass algorithms that w.h.p. yield: a (1 − ε)-approximation using Õ(n/ε³) space and exponential time; a (3/4 − ε)-approximation using Õ(n/ε²) space and polynomial time. If there are no duplicate clauses, the space can be further improved to Õ(n/ε). We complement these algorithms with a lower bound showing that any single-pass algorithm that approximates the optimum of MAX-SAT up to a factor better than 3/4 requires Ω(n) space. On the other hand, for the MAX-AND-SAT problem where clauses are conjunctions of literals, we show that any single-pass algorithm that approximates the optimum up to a factor better than 1/2 with success probability at least 1 − 1/(mn)² must use Ω(mn) space. (W.h.p. denotes "with high probability"; here, we consider 1 − 1/poly(n) or 1 − 1/poly(m) as high probability.)
  • Classification on the Computational Complexity of Spin Models
Classification on the Computational Complexity of Spin Models. Shi-Xin Zhang (Institute for Advanced Study, Tsinghua University, Beijing 100084, China). (Dated: November 12, 2019)

In this note, we provide a unifying framework to investigate the computational complexity of classical spin models and give the full classification on spin models in terms of system dimensions, randomness, external magnetic fields and types of spin coupling. We further discuss about the implications of NP-complete Hamiltonian models in physics and the fundamental limitations of all numerical methods imposed by such models. We conclude by a brief discussion on the picture when quantum computation and quantum complexity theory are included.

Introduction: Computational complexity classes are very important tools in computer science to characterize the hardness of problems [1]. After the original work of Cook [2], NP-completeness (NPC) [3] and in general the notion of complete problems of corresponding complexity classes have become the dominant approach for addressing how hard a problem is. For decision problems in nondeterministic polynomial time (NP), namely decision problems that can be solved in polynomial time on a nondeterministic Turing machine, they can be classified as polynomial time (P), NP-complete or NP-intermediate (neither in NPC nor in P).

[…] hardness of specific numerical methods with the help of models in NP(QMA)-complete complexity classes [13, 14]. In this work, we elaborate on this idea and show that the fundamental limitations of numerical approximation are universal to all numerical simulation schemes due to the existence of NP-hard models. This fact brings invaluable insight into the numerical study on physics systems in general. Efficient numerical schemes often fail in some models (namely the time to get reasonable approximation results scales exponentially with the system size), and in such cases we often attribute the failure to the al-
  • On the Mysteries of MAX NAE-SAT
On the Mysteries of MAX NAE-SAT. Joshua Brakensiek, Neng Huang, Aaron Potechin, Uri Zwick.

Abstract. MAX NAE-SAT is a natural optimization problem, closely related to its better-known relative MAX SAT. The approximability status of MAX NAE-SAT is almost completely understood if all clauses have the same size k, for some k ≥ 2. We refer to this problem as MAX NAE-{k}-SAT. For k = 2, it is essentially the celebrated MAX CUT problem. For k = 3, it is related to the MAX CUT problem in graphs that can be fractionally covered by triangles. For k ≥ 4, it is known that an approximation ratio of 1 − 1/2^{k−1}, obtained by choosing a random assignment, is optimal, assuming P ≠ NP. For every k ≥ 2, an approximation ratio of at least 7/8 can be obtained for MAX NAE-{k}-SAT. There was some hope, therefore, that there is also a 7/8-approximation algorithm for MAX NAE-SAT, where clauses of all sizes are allowed simultaneously. Our main result is that there is no 7/8-approximation algorithm for MAX NAE-SAT, assuming the unique games conjecture (UGC). In fact, even for almost satisfiable instances of MAX NAE-{3,5}-SAT (i.e., MAX NAE-SAT where all clauses have size 3 or 5), the best approximation ratio that can be achieved, assuming UGC, is at most 3(√21 − 4)/2 ≈ 0.8739. Using calculus of variations, we extend the analysis of O'Donnell and Wu for MAX CUT to MAX NAE-{3}-SAT. We obtain an optimal algorithm, assuming UGC, for MAX NAE-{3}-SAT, slightly improving on previous algorithms.
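A small numerical aside (my own check, not from the paper): the random-assignment ratio 1 − 1/2^{k−1} quoted above, and the UGC-based bound 3(√21 − 4)/2 ≈ 0.8739 for MAX NAE-{3,5}-SAT.

```python
import math

# A uniformly random assignment falsifies a size-k NAE clause only when all
# k literals receive the same value, which happens with probability 2 / 2**k.
for k in range(2, 7):
    print(k, 1 - 1 / 2 ** (k - 1))

print(3 * (math.sqrt(21) - 4) / 2)   # ~0.8739, the quoted UGC bound
```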
  • Approximation Algorithms for Max Cut
Approximation Algorithms for Max Cut. Angeliki Chalki, Network Algorithms and Complexity, 2015.

Outline: Preliminaries · Approximation algorithm for Max Cut with unit weights · Weighted Max Cut · Inapproximability Results.

Definitions.
Max Cut: Given an undirected graph G = (V, E), find a partition of V into two subsets A, B so as to maximize the number of edges having one endpoint in A and the other in B.
Weighted Max Cut: Given an undirected graph G = (V, E) and a positive weight w_e for each edge, find a partition of V into two subsets A, B so as to maximize the combined weight of the edges having one endpoint in A and the other in B.

NP-Hardness. Max Cut:
- is NP-hard (reduction from 3-NAESAT);
- is the same as finding a maximum bipartite subgraph of G;
- can be thought of as a variant of the 2-coloring problem in which we try to maximize the number of edges consisting of two different colors;
- is APX-hard [Papadimitriou, Yannakakis, 1991]; there is no PTAS for Max Cut unless P=NP.

Max Cut with unit weights: The algorithm iteratively updates the cut.
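The last sentence refers to an algorithm that iteratively updates the cut; presumably this is the standard local-improvement algorithm, a sketch of which (unit weights, my own encoding) is below. When no single vertex flip helps, at least half of all edges cross the cut, which yields a 2-approximation.

```python
def local_search_max_cut(n, edges):
    """Unit-weight Max Cut by local search: start from any partition and flip
    a vertex whenever that strictly increases the number of crossing edges.
    Terminates because the cut value increases by at least 1 per flip."""
    side = [False] * n                       # False -> A, True -> B
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    improved = True
    while improved:
        improved = False
        for v in range(n):
            same = sum(1 for u in adj[v] if side[u] == side[v])
            cross = len(adj[v]) - same
            if same > cross:                 # flipping v gains same - cross edges
                side[v] = not side[v]
                improved = True

    cut = sum(1 for u, v in edges if side[u] != side[v])
    return side, cut

# 4-cycle plus one chord: the local optimum cuts 4 of the 5 edges.
print(local_search_max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
```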