A Polynomial-Time Algorithm for Unconstrained Binary Quadratic Optimization


A Polynomial-Time Algorithm for Unconstrained Binary Quadratic Optimization

Juan Ignacio Mulero-Martínez
Department of Automatic Control, Electrical Engineering and Electronic Technology, Technical University of Cartagena, Campus Muralla del Mar 30203, Spain. E-mail: [email protected]

arXiv:2005.07030v6 [cs.DS] 30 Jan 2021

Abstract

In this paper, an exact polynomial-time algorithm is developed to solve unrestricted binary quadratic programs. The computational complexity is $O(n^{15/2})$; although very conservative, it is sufficient to prove that this minimization problem is in the complexity class P. The implementation aspects are also described in detail, with special emphasis on the transformation of the quadratic program into a linear program that can be solved in polynomial time. The algorithm was implemented in MATLAB and checked by generating five million matrices of arbitrary dimensions up to 30 with random entries in the range [−50, 50]. All the experiments carried out have revealed that the method works correctly.

Keywords: Unconstrained binary quadratic programming, global optimization, complexity measures and classes

1. Introduction

The unconstrained binary quadratic programming (UBQP) problem occurs in many computer vision, image processing, and pattern recognition applications, including but not limited to image segmentation/pixel labeling, image registration/matching, image denoising/restoration, graph partitioning, data clustering, and data classification. Much of the algorithmic progress on UBQP has been due to the computer vision research community, [1], [2], [3], [4]. For example, the objective functions in the UBQP problem are a class of energy functions that are widely useful and have had very striking success in computer vision (see [5] for a recent survey).
The UBQP problem dates back to the 1960s, when pseudo-Boolean functions and binary quadratic optimization were introduced by Hammer and Rudeanu, [6]. Since then, it has become an active research area in Discrete Mathematics and Complexity Theory (the surveys in [7] and [8] give a good account of this topic). This problem has become a major one in recent years due to the discovery that UBQP represents a unifying framework for a very wide variety of combinatorial optimization problems.

Preprint submitted to ArXiv, February 2, 2021

In particular, as pointed out in [9], the UBQP model includes the following important combinatorial optimization problems: maximum cut problems, maximum clique problems, maximum independent set problems, graph coloring problems, satisfiability problems, quadratic knapsack problems, etc.

The UBQP problem is generally NP-hard, [10] (one can use the UBQP problem to optimize the number of constraints satisfied in a 0/1 integer programming instance, one of Karp's 21 NP-complete problems). Only a few special cases are solvable in polynomial time. In fact, the problem of determining local minima of pseudo-Boolean functions lies in the PLS-complete class (the class of hardest polynomial local search problems), [10], [11], and, in general, local search problems lie in the EXP class, [12], [13], [14], [15], [16], [17]. Global optimization methods are NP-complete.
To obtain a global optimal solution by exact methods (generally based on branch-and-bound strategies), the following techniques should be highlighted: the combinatorial variable elimination algorithm¹, [6], [18], [19]; continuous relaxation with linearization (where the requirement of binary variables is replaced by the weaker restriction of membership in the closed interval [0, 1]), [20], [21]; posiform transformations, [22], [23]; conflict graphs (the connection between the posiform minimization problem and the maximum weighted stability problem), [24], [25], [26], [27], [28], [29]; linearization strategies such as standard linearization (consisting in transforming the minimization problem into an equivalent linear 0–1 programming problem), [30], [31], [32], [33], the Glover method, [34], and the improved linearization strategy, [35] (the reader is referred to [36] for a recent comparison of these methods); semidefinite-based solvers, [37] (and the references therein); et cetera. The reader is referred to the survey [38] for a detailed description of these techniques up to 2014.

Many researchers have extensively studied the UBQP problem; however, to date nobody has succeeded in developing an algorithm running in polynomial time. We claim, and this is the main contribution of this work, that UBQP is in the complexity class P. The main idea is to transform the UBQP problem into a linear programming (LP) problem, which is solved in polynomial time. We guarantee that the minimum of the LP problem is also the minimum of the UBQP problem. We also provide the implementation details of the algorithm, motivated by the following aspects that any work on discrete optimization should present:

(i) Describe the algorithms in enough detail to be able to reproduce the experiments and even improve them in the future.
(ii) Provide the source code so that it is openly available to the scientific community. Interestingly, a recent study by Dunning has revealed that only 4% of papers on heuristic methods provide the source code, [39].

(iii) Establish random test problems with an arbitrary input size. Here it is important to indicate the ranges of the parameters in the UBQP problem.

¹This algorithm is in the class EXP and only runs in polynomial time for pseudo-Boolean functions associated with graphs of bounded tree-width.

This procedure has been implemented in MATLAB (source code is provided as supplementary material) and checked with five million random matrices up to dimension 30, with entries in the range [−50, 50]. An advantage of this algorithm is its modularity with respect to the dimension of the problem: the set of linear constraints of the equivalent linear programming problem is fixed for a given dimension, regardless of the objective function of the quadratic problem. Finally, we highlight that the objective of this work is not the speed of resolution but simply to show that the UBQP problem can be solved in polynomial time. Future work will analyze large-scale UBQP problems as well as the design of more efficient polynomial-time algorithms.

The paper is organized as follows: Section 2 describes the relaxation process for the UBQP problem. Next, in Section 3, the main result about the equivalence of the UBQP problem with a linear programming problem is addressed. For simplicity of exposition, the case n = 3 is presented first and then generalized to n > 3. The computational complexity in both time and space is analyzed in Section 4. The implementation features concerning primary variables, transformation of the objective function, and convexity and consistency constraints are treated in Section 5. The design of the experiment for testing the solution is presented in Section 6.
Finally, Section 7 is dedicated to discussing the main aspects presented in this work as well as possible future work.

2. Background

Let $\mathbb{B} = \{0, 1\}$ and let $f : \mathbb{B}^n \to \mathbb{R}$ be a quadratic objective function defined as $f(x) = x^T Q x + b^T x$ with $Q = Q^T \in \mathbb{R}^{n \times n}$, $\mathrm{diag}(Q) = (0, \ldots, 0)$, and $b \in \mathbb{R}^n$. The UBQP problem is defined as follows:

UBQP: $\min_{x \in \mathbb{B}^n} f(x)$.

The objective function f is usually called a quadratic pseudo-Boolean function, i.e. a multilinear polynomial in binary unknowns. These functions represent a class of energy functions that are widely useful and have had very striking success in computer vision (see [5] for a recent survey).

This problem can naturally be extended to the solid hypercube $H_n = [0, 1]^n$ spanned by $\mathbb{B}^n$. The extension of the pseudo-Boolean function $f : \mathbb{B}^n \to \mathbb{R}$ is a polynomial function $f^{\mathrm{pol}} : H_n \to \mathbb{R}$ that coincides with f at the vertices of $H_n$. Rosenberg discovered an attractive feature of the multilinear polynomial extension $f^{\mathrm{pol}}$, [40]: the minimum of $f^{\mathrm{pol}}$ is always attained at a vertex of $H_n$, and hence this minimum coincides with the minimum of f. From this, our optimization problem reduces to the following relaxed quadratic problem:

$(P_n)$: $\min_{x \in H_n} f(x)$.

3. Main Result

In this section, we prove that Problem $(P_n)$ can be reduced to a Linear Programming problem.

3.1. A Simple Case

We begin with the simple case of minimizing a quadratic form f(x) on the cube $H_3$. Here the minimization problem is stated as follows:

$(P_3)$: $\min_{x \in H_3} f(x)$.

Associated with the cube $H_3$ we have a map $\phi : H_3 \to [0, 2]^3 \times \left[0, \tfrac{1}{2}\right]^3$ defined as

$\phi(x_1, x_2, x_3) = \left( \tfrac{x_1 + 2x_1x_2 + x_2}{2}, \tfrac{x_1 + 2x_1x_3 + x_3}{2}, \tfrac{x_2 + 2x_2x_3 + x_3}{2}, \tfrac{x_1 - 2x_1x_2 + x_2}{2}, \tfrac{x_1 - 2x_1x_3 + x_3}{2}, \tfrac{x_2 - 2x_2x_3 + x_3}{2} \right).$

An important fact is that the cube $H_3$ can be expressed as the convex hull of the finite set of vertices $V = \{0, 1\}^3$. For simplicity, we enumerate the vertices in V as $p_1, p_2, \ldots, p_8$, so that $H_3$ can be written as the set of convex combinations of those vertices, i.e. $H_3 = \mathrm{conv}(V)$, where

$\mathrm{conv}(V) = \left\{ \sum_{i=1}^{8} \alpha_i p_i : \alpha_i \geq 0, \; \sum_{i=1}^{8} \alpha_i = 1 \right\}.$
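Because diag(Q) = 0, the objective f contains only cross terms and linear terms, so f is itself multilinear and coincides with its own extension $f^{\mathrm{pol}}$ on $H_n$. Rosenberg's observation can then be checked numerically on a small random instance by brute force. The following is a sketch in Python rather than the paper's MATLAB; the dimension, seed, sample count, and tolerance are illustrative choices, not the paper's protocol:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

n = 4
# Random UBQP instance: symmetric Q with zero diagonal, linear term b,
# entries in [-50, 50] as in the paper's experiments.
Q = rng.integers(-50, 51, size=(n, n)).astype(float)
Q = (Q + Q.T) / 2
np.fill_diagonal(Q, 0.0)
b = rng.integers(-50, 51, size=n).astype(float)

def f(x):
    return x @ Q @ x + b @ x

# Exact minimum over the 2^n binary vertices (brute force, small n only).
vertex_min = min(f(np.array(v, dtype=float))
                 for v in itertools.product([0, 1], repeat=n))

# Since diag(Q) = 0, f is multilinear, so f equals its extension on [0,1]^n.
# Rosenberg's result: no interior point of the cube beats the best vertex.
interior_vals = [f(rng.random(n)) for _ in range(10000)]
assert all(v >= vertex_min - 1e-9 for v in interior_vals)
print("vertex minimum:", vertex_min)
```

Note that f(0) = 0, so the vertex minimum of any such instance is at most 0; the sampling above only illustrates the relaxation step, it is not a proof.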
The map φ is the composition of the maps $\alpha : H_3 \to [0, 1]^6$ and $\beta : [0, 1]^6 \to [0, 2]^3 \times \left[0, \tfrac{1}{2}\right]^3$ defined as

$\alpha(x) = (x_1, \; x_1 x_2, \; x_1 x_3, \; x_2, \; x_2 x_3, \; x_3), \qquad (1)$

$\beta(y) = E_3 y \quad \text{for every } y \in [0, 1]^6, \qquad (2)$

where

$E_3 = \frac{1}{2} \begin{pmatrix} 1 & 2 & 0 & 1 & 0 & 0 \\ 1 & 0 & 2 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 2 & 1 \\ 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & 0 & -2 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & -2 & 1 \end{pmatrix}. \qquad (3)$

More specifically, $\phi = \beta \circ \alpha$. The map α can be built from $H_3$ as a selection of entries of the Kronecker product $\tilde{x} \otimes \tilde{x}$ with $\tilde{x} = (1, x^T)^T$. As a summary, the maps φ, α, and β are represented in the diagram of Figure 1.

Figure 1: Diagram for the maps φ, α, and β.
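The factorization $\phi = \beta \circ \alpha$ can be verified numerically. The Python sketch below (the paper's implementation is in MATLAB; function names and the sample size here are our own) builds $E_3$ from equation (3) and checks, on random points of $H_3$, both the factorization and the stated codomain $[0, 2]^3 \times [0, 1/2]^3$:

```python
import numpy as np

rng = np.random.default_rng(1)

# E3 from equation (3): beta(y) = E3 @ y maps the monomial vector
# y = alpha(x) = (x1, x1*x2, x1*x3, x2, x2*x3, x3) onto phi(x).
E3 = 0.5 * np.array([
    [1,  2,  0, 1,  0, 0],
    [1,  0,  2, 0,  0, 1],
    [0,  0,  0, 1,  2, 1],
    [1, -2,  0, 1,  0, 0],
    [1,  0, -2, 0,  0, 1],
    [0,  0,  0, 1, -2, 1],
])

def alpha(x):
    x1, x2, x3 = x
    return np.array([x1, x1 * x2, x1 * x3, x2, x2 * x3, x3])

def phi(x):
    x1, x2, x3 = x
    return 0.5 * np.array([
        x1 + 2 * x1 * x2 + x2,
        x1 + 2 * x1 * x3 + x3,
        x2 + 2 * x2 * x3 + x3,
        x1 - 2 * x1 * x2 + x2,
        x1 - 2 * x1 * x3 + x3,
        x2 - 2 * x2 * x3 + x3,
    ])

# Check phi = beta(alpha(.)) on H3, and that the image respects the
# codomain: first three coordinates in [0, 2], last three in [0, 1/2].
for _ in range(1000):
    x = rng.random(3)
    w = phi(x)
    assert np.allclose(E3 @ alpha(x), w)
    assert np.all(w[:3] >= 0) and np.all(w[:3] <= 2)
    assert np.all(w[3:] >= 0) and np.all(w[3:] <= 0.5)
print("phi = beta(alpha(x)) verified on 1000 random points")
```

Each row of $E_3$ can be read off directly from the corresponding component of φ; for instance, the first component $(x_1 + 2x_1x_2 + x_2)/2$ equals $(y_1 + 2y_2 + y_4)/2$ in the coordinates of $y = \alpha(x)$.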