Equilibria, Fixed Points, and Computational Complexity – Nevanlinna Prize Lecture


P. I. C. M. – 2018, Rio de Janeiro, Vol. 1 (147–210)
Constantinos Daskalakis

Abstract. The concept of equilibrium, in its various forms, has played a central role in the development of Game Theory and Economics. The mathematical properties and computational complexity of equilibria are also intimately related to mathematical programming, online learning, and fixed point theory. More recently, equilibrium computation has been proposed as a means to learn generative models of high-dimensional distributions.

In this paper, we review fundamental results on minimax equilibrium and its relationship to mathematical programming and online learning. We then turn to Nash equilibrium, reviewing some of our work on its computational intractability. This intractability is of an unusual kind. While computing Nash equilibrium does belong to the well-known complexity class NP, Nash's theorem, which guarantees that Nash equilibrium always exists, makes it unlikely that the problem is NP-complete. We show instead that it is as hard computationally as computing Brouwer fixed points, in a precise technical sense, giving rise to the complexity class PPAD, a subclass of total search problems in NP that we will define. The intractability of Nash equilibrium makes it unlikely to always arise in practice, so we study special circumstances where it becomes tractable. We also discuss modern applications of equilibrium computation, presenting recent progress and several open problems in the training of Generative Adversarial Networks. Finally, we take a broader look at the complexity of total search problems in NP, discussing their intimate relationship to fundamental problems in a range of fields, including Combinatorics, Discrete and Continuous Optimization, Social Choice Theory, Economics, and Cryptography. We overview recent work and present a host of open problems.

MSC2010: primary 68Q17; secondary 05C60.

1 Introduction: 90 years of the minimax theorem

Humans strategize when making decisions. Sometimes, they optimize their decisions against a stationary, stochastically changing, or completely unpredictable environment. Optimizing a decision in such a situation falls in the realm of Optimization Theory. Other times, a decision is pitted against those made by others who are also thinking strategically. Understanding the outcome of the interaction among strategic decision-makers falls in the realm of Game Theory, which studies situations of conflict, competition or cooperation as these may arise in everyday life, game-playing, politics, international relations, and many other multi-agent settings, such as interactions among species (Osborne and Rubinstein [1994]).

A founding stone in the development of Game Theory was von Neumann's minimax theorem (von Neumann [1928]), whose statement is as simple as it is remarkable: if $X \subseteq \mathbb{R}^n$ and $Y \subseteq \mathbb{R}^m$ are compact and convex subsets of Euclidean space, and $f(x, y)$ is a continuous function that is convex in $x \in X$ and concave in $y \in Y$, then

(1)   $\min_{x \in X} \max_{y \in Y} f(x, y) = \max_{y \in Y} \min_{x \in X} f(x, y).$

To see how (1) relates to Game Theory, let us imagine two players, "Min" and "Max," engaging in a strategic interaction. Min can choose any strategy in X, Max can choose any strategy in Y, and $f(x, y)$ is the payment of Min to Max when they choose strategies x and y respectively.
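As a quick numerical illustration of (1) (an example of ours, not from the lecture), take the toy convex-concave function f(x, y) = x^2 - y^2 on X = Y = [-1, 1]: both sides of (1) equal 0, attained at the saddle point (0, 0). A minimal grid check:

```python
# Illustrative check of the minimax identity (1) on f(x, y) = x^2 - y^2 over
# X = Y = [-1, 1]; both orders of optimization give the same value.
import numpy as np

grid = np.linspace(-1.0, 1.0, 2001)
F = grid[:, None] ** 2 - grid[None, :] ** 2   # F[i, j] = f(x_i, y_j)

min_max = F.max(axis=1).min()   # min over x of (max over y of f)
max_min = F.min(axis=0).max()   # max over y of (min over x of f)
print(min_max, max_min)         # both are 0.0, the value at the saddle point (0, 0)
```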
Such an interaction is called a two-player zero-sum game, as the sum of the players' payoffs is zero for any choice of strategies. Let us now offer two interpretations of (1):

• First, it confirms an unusual fact. For some scalar value v, if for all strategies y of Max there exists a strategy x of Min such that $f(x, y) \le v$, that is, Min pays no more than v to Max, then there exists a universal strategy x for Min such that for all strategies y of Max it holds that $f(x, y) \le v$.

• In particular, if x is the optimal solution to the left-hand side of (1), y is the optimal solution to the right-hand side of (1), and v is the optimal value of both sides, then if Min plays x and Max plays y, Min pays v to Max, and neither can Min decrease this payment nor can Max increase it by unilaterally changing their strategy. Such a pair of strategies is called a minimax equilibrium, and von Neumann's theorem confirms that it always exists!

One of the simplest two-player zero-sum games ever played is rock-paper-scissors, which every kid must have played in the school yard, or at least used to before electronic games took over. In this game, both players have the same three actions, "rock," "paper," and "scissors," to choose from. Their actions are compared (rock beats scissors, scissors beats paper, and paper beats rock): the loser pays the winner a dollar, and if they choose the same action, the players tie. Figure 1 assigns numerical payoffs to the different outcomes of the game. In this tabular representation, the rows represent the actions available to Min, the columns represent the actions available to Max, and every square of the table is the payment of Min to Max for a pair of actions. For example, if Min chooses "rock" and Max chooses "scissors," then Min wins a dollar and Max loses a dollar, so the payment of Min to Max is -1, as the corresponding square of the table confirms.

              rock   paper   scissors
  rock          0      1       -1
  paper        -1      0        1
  scissors      1     -1        0

Figure 1: Rock-paper-scissors (payment of Min, the row player, to Max, the column player).

To map rock-paper-scissors to the setting of the minimax theorem, let us take both X and Y to be the simplex of distributions over {rock, paper, scissors}, and let Z be the matrix of Figure 1, so that given a pair of distributions $x \in X$ and $y \in Y$, the expected payment of Min to Max is $f(x, y) = x^{\mathsf T} Z y$. The minimax equilibrium of the game thus defined is $(x, y)$, where $x = y = (\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3})$, which confirms the experience of anyone who was challenged by a random student in a school yard to play this game. Under the uniform strategy, both players receive an expected payoff of 0, and no player can improve their payoff by unilaterally changing their strategy. As such, it is reasonable to expect a population of students to converge to the uniform strategy. After all, rock-paper-scissors is so symmetric that no other behavior would make sense.

The question that arises is how reasonable it is to expect minimax equilibrium to arise in more complex games (X, Y, f), and how computationally burdensome it is to compute this equilibrium. Indeed, these two questions are intimately related: if it is too burdensome computationally to compute a minimax equilibrium, then it is also unlikely that strategic agents will be able to compute it. After all, their cognitive abilities are also computationally bounded.
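To make the rock-paper-scissors claim concrete, here is a small check (our illustration, not from the paper) that the uniform strategy is a minimax equilibrium of the game of Figure 1: against a uniform opponent, every action yields expected payment 0, so neither player can gain by deviating.

```python
# Illustrative check (not from the paper): the uniform strategy is a minimax
# equilibrium of rock-paper-scissors. Rows are Min's actions, columns are
# Max's actions, and entries are the payment of Min to Max, as in Figure 1.
import numpy as np

Z = np.array([[ 0,  1, -1],    # rock
              [-1,  0,  1],    # paper
              [ 1, -1,  0]])   # scissors

x = y = np.ones(3) / 3                  # uniform mixed strategies
print(x @ Z @ y)                        # expected payment of Min to Max: 0.0

# Best-response values: Max cannot push the payment above 0 against uniform x,
# and Min cannot push it below 0 against uniform y.
print((x @ Z).max(), (Z @ y).min())     # 0.0 0.0
```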
Shedding light on these important questions, a range of fundamental results have been shown over the past ninety years, establishing an intimate relationship between the minimax theorem, mathematical programming, and online learning. Noting that all results we are about to discuss have straightforward analogs in the general case, we will remain consistent with the historical development of these ideas, discussing these results in the special case where $X = \Delta_n \subseteq \mathbb{R}^n$ and $Y = \Delta_m \subseteq \mathbb{R}^m$ are probability simplices, and $f(x, y) = x^{\mathsf T} A y$, where A is some $n \times m$ real matrix. In this case, (1) becomes:

(2)   $\min_{x \in \Delta_n} \max_{y \in \Delta_m} x^{\mathsf T} A y = \max_{y \in \Delta_m} \min_{x \in \Delta_n} x^{\mathsf T} A y.$

1.1 Minimax Equilibrium, Linear Programming, and Online Learning.

It was not too long after the proof of the minimax theorem that Dantzig met von Neumann to show him the Simplex algorithm for linear programming. According to Dantzig [1982], von Neumann quickly stood up and sketched the mathematical theory of linear programming duality. It is not hard to see that (2) follows from strong linear programming duality. Indeed, one can write simple linear programs to solve both sides of (2); hence, as we came to know some thirty years after the Simplex algorithm, a minimax equilibrium can be computed in time polynomial in the description of the matrix A (Khachiyan [1979]). Interestingly, it was conjectured by Dantzig [1951] and recently proven by Adler [2013] that the minimax theorem and strong linear programming duality are in some sense equivalent: it is not just true that a minimax equilibrium can be computed by solving linear programs, but any linear program can also be solved by finding an equilibrium!

As the connections between the minimax theorem and linear programming were sinking in, researchers proposed simple dynamics for solving min-max problems by having the Min and Max players of (2) run simple learning procedures in tandem. An early procedure, proposed by G. W. Brown [1951] and shown to converge by Robinson [1951], was fictitious play: in every round of this procedure, each player plays a best response to the history of strategies played by her opponent thus far. This simple procedure converges to a minimax equilibrium in the following "average sense": if $(x^t, y^t)_t$ is the trajectory of strategies traversed by the Min and Max players using fictitious play, then the average payment $\frac{1}{t} \sum_{\tau \le t} f(x^\tau, y^\tau)$ of Min to Max converges to the optimum of (2) as $t \to \infty$, albeit this convergence may be exponentially slow in the number of actions, $n + m$, available to the players, as shown in recent work with Pan (Daskalakis and Pan [2014]).
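Fictitious play is easy to simulate. The sketch below (ours, with arbitrary initialization and tie-breaking, not the exact setup analyzed by Robinson [1951] or Daskalakis and Pan [2014]) runs it on rock-paper-scissors and reports the average payment, which should approach the value of the game, 0.

```python
# Minimal sketch of fictitious play for the matrix game in (2): in each round,
# every player best-responds to the opponent's empirical mixed strategy so far,
# and we track the time-averaged payment of Min to Max.
import numpy as np

def fictitious_play(A, rounds=100_000):
    n, m = A.shape
    x_counts = np.zeros(n)              # how often Min has played each action
    y_counts = np.zeros(m)              # how often Max has played each action
    x_counts[0], y_counts[-1] = 1, 1    # arbitrary (different) initial actions
    total_payment = 0.0
    for _ in range(rounds):
        i = int(np.argmin(A @ (y_counts / y_counts.sum())))   # Min minimizes payment
        j = int(np.argmax((x_counts / x_counts.sum()) @ A))   # Max maximizes payment
        total_payment += A[i, j]
        x_counts[i] += 1
        y_counts[j] += 1
    return total_payment / rounds

Z = np.array([[0, 1, -1], [-1, 0, 1], [1, -1, 0]], dtype=float)
print(fictitious_play(Z))   # close to 0, the value of rock-paper-scissors
```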
Recommended publications
  • Constantinos Daskalakis
    The Work of Constantinos Daskalakis
    Allyn Jackson

    How long does it take to solve a problem on a computer? This seemingly innocuous question, unanswerable in general, lies at the heart of computational complexity theory. With deep roots in mathematics and logic, this area of research seeks to understand the boundary between which problems are efficiently solvable by computer and which are not. Computational complexity theory is one of the most vibrant and inventive branches of computer science, and Constantinos Daskalakis stands out as one of its brightest young lights. His work has transformed understanding about the computational complexity of a host of fundamental problems in game theory, markets, auctions, and other economic structures. A wide-ranging curiosity, technical ingenuity, and deep theoretical insight have enabled Daskalakis to bring profound new perspectives on these problems. His first outstanding achievement grew out of his doctoral dissertation and lies at the border of game theory and computer science. Game theory models interactions of rational agents (these could be individual people, businesses, governments, etc.) in situations of conflict, competition, or cooperation. A foundational concept in game theory is that of Nash equilibrium. A game reaches a Nash equilibrium if each agent adopts the best strategy possible, taking into account the strategies of the other agents, so that there is no incentive for any agent to change strategy. The concept is named after mathematician John F. Nash, who proved in 1950 that all games have Nash equilibria. This result had a profound effect on economics and earned Nash the Nobel Prize in Economic Sciences in 1994.
  • How Easy Is Local Search?
    JOURNAL OF COMPUTER AND SYSTEM SCIENCES 37, 79-100 (1988)
    How Easy Is Local Search?
    DAVID S. JOHNSON, AT&T Bell Laboratories, Murray Hill, New Jersey 07974
    CHRISTOS H. PAPADIMITRIOU, Stanford University, Stanford, California and National Technical University of Athens, Athens, Greece
    MIHALIS YANNAKAKIS, AT&T Bell Laboratories, Murray Hill, New Jersey 07974
    Received December 5, 1986; revised June 5, 1987

    We investigate the complexity of finding locally optimal solutions to NP-hard combinatorial optimization problems. Local optimality arises in the context of local search algorithms, which try to find improved solutions by considering perturbations of the current solution ("neighbors" of that solution). If no neighboring solution is better than the current solution, it is locally optimal. Finding locally optimal solutions is presumably easier than finding optimal solutions. Nevertheless, many popular local search algorithms are based on neighborhood structures for which locally optimal solutions are not known to be computable in polynomial time, either by using the local search algorithms themselves or by taking some indirect route. We define a natural class PLS consisting essentially of those local search problems for which local optimality can be verified in polynomial time, and show that there are complete problems for this class. In particular, finding a partition of a graph that is locally optimal with respect to the well-known Kernighan-Lin algorithm for graph partitioning is PLS-complete, and hence can be accomplished in polynomial time only if local optima can be found in polynomial time for all local search problems in PLS. © 1988 Academic Press, Inc.
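As a toy illustration of the local search pattern the abstract refers to (our example, using the simple single-vertex-flip neighborhood for Max-Cut rather than the Kernighan-Lin neighborhood studied in the paper): repeatedly move any vertex whose move increases the cut, and stop when no single move helps, i.e., at a local optimum.

```python
# Illustrative single-flip local search for Max-Cut; each flip strictly increases
# the cut, so the loop terminates at a locally optimal partition.
# graph is a dict mapping each vertex to the set of its neighbors.
def local_max_cut(graph):
    side = {v: 0 for v in graph}              # start with every vertex on side 0
    improved = True
    while improved:
        improved = False
        for v in graph:
            same = sum(1 for u in graph[v] if side[u] == side[v])
            cut = len(graph[v]) - same
            if same > cut:                    # flipping v increases the cut
                side[v] ^= 1
                improved = True
    return side                               # no single flip improves the cut

# Example: on a 4-cycle the local optimum found is also a maximum cut.
print(local_max_cut({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))
```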
  • LSPACE VS NP IS AS HARD AS P VS NP
    LSPACE VS NP IS AS HARD AS P VS NP
    FRANK VEGA

    Abstract. The P versus NP problem is a major unsolved problem in computer science. It consists in knowing the answer to the following question: Is P equal to NP? Other major complexity classes are LSPACE, PSPACE, ESPACE, E and EXP. Whether LSPACE = P is a fundamental question that is as important as it is unresolved. We show that if P = NP, then LSPACE = NP. Consequently, if LSPACE is not equal to NP, then P is not equal to NP. According to Lance Fortnow, it seems that LSPACE versus NP is easier to prove. However, with this proof we show this problem is as hard as P versus NP. Moreover, we prove the complexity class P is not equal to PSPACE as a direct consequence of this result. Furthermore, we demonstrate that if PSPACE is not equal to EXP, then P is not equal to NP. In addition, if E = ESPACE, then P is not equal to NP.

    1. Introduction
    In complexity theory, a function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem [6]. A functional problem F is defined as a binary relation $(x, y) \in R$ over strings of an arbitrary alphabet $\Sigma$: $R \subseteq \Sigma^* \times \Sigma^*$. A Turing machine M solves F if for every input x such that there exists a y satisfying $(x, y) \in R$, M produces one such y, that is, M(x) = y [6].
  • Conformance Testing
    Conformance Testing
    David Lee and Mihalis Yannakakis, Bell Laboratories, Lucent Technologies, 600 Mountain Avenue, RM 2C-423, Murray Hill, NJ 07974

    System reliability cannot be overemphasized in software engineering as large and complex systems are being built to fulfill complicated tasks. Consequently, testing is an indispensable part of system design and implementation; yet it has proved to be a formidable task for complex systems. Testing software is a very wide field with an extensive literature; see the articles in this volume. We discuss testing of software systems that can be modeled by finite state machines or their extensions, to ensure that the implementation conforms to the design. A finite state machine contains a finite number of states and produces outputs on state transitions after receiving inputs. Finite state machines are widely used to model software systems such as communication protocols. In a testing problem we have a specification machine, which is a design of a system, and an implementation machine, which is a "black box" for which we can only observe its I/O behavior. The task is to test whether the implementation conforms to the specification. This is called the conformance testing or fault detection problem. A test sequence that solves this problem is called a checking sequence. Testing finite state machines has been studied for a very long time, starting with Moore's seminal 1956 paper on "gedanken-experiments" (31), which introduced the basic framework for testing problems. Among other fundamental problems, Moore posed the conformance testing problem, proposed an approach, and asked for a better solution.
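As a toy sketch of the black-box setting (ours, not from the article): drive a specification Mealy machine and an implementation with the same input sequence and compare the observed outputs; a divergence witnesses a nonconforming implementation.

```python
# Toy conformance check (illustrative only): run one input sequence on a
# specification Mealy machine and on an implementation, then compare outputs.
# A machine maps (state, input) -> (next_state, output).
def run(machine, start, inputs):
    state, outputs = start, []
    for a in inputs:
        state, out = machine[(state, a)]
        outputs.append(out)
    return outputs

spec = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
        ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0)}
impl = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 1),
        ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 1)}   # faulty output on (s1, b)

tests = ["a", "b", "a", "b"]
print(run(spec, "s0", tests))   # [0, 0, 1, 1]
print(run(impl, "s0", tests))   # [0, 1, 1, 1] -- mismatch reveals the fault
```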
  • Xi Chen: Curriculum Vitae
    Xi Chen, Associate Professor
    Department of Computer Science, Columbia University, New York, NY 10027
    Phone: 1-212-939-7136; Email: [email protected]; Homepage: http://www.cs.columbia.edu/~xichen
    Date of Preparation: March 12, 2016

    Research Interests: Theoretical Computer Science, including Algorithmic Game Theory and Economics, Complexity Theory, Graph Isomorphism Testing, and Property Testing.

    Academic Training:
    B.S. Physics / Mathematics, Tsinghua University, Sep 1999 – Jul 2003
    Ph.D. Computer Science, Tsinghua University, Sep 2003 – Jul 2007 (Advisor: Professor Bo Zhang, Tsinghua University; Thesis Title: The Complexity of Two-Player Nash Equilibria)

    Academic Positions:
    Associate Professor (with tenure), Columbia University, Mar 2016 – Now
    Associate Professor (without tenure), Columbia University, Jul 2015 – Mar 2016
    Assistant Professor, Columbia University, Jan 2011 – Jun 2015
    Postdoctoral Researcher, Columbia University, Aug 2010 – Dec 2010
    Postdoctoral Researcher, University of Southern California, Aug 2009 – Aug 2010
    Postdoctoral Researcher, Princeton University, Aug 2008 – Aug 2009
    Postdoctoral Researcher, Institute for Advanced Study, Sep 2007 – Aug 2008

    Honors and Awards:
    SIAM Outstanding Paper Award, 2016
    EATCS Presburger Award, 2015
    Alfred P. Sloan Research Fellowship, 2012
    NSF CAREER Award, 2012
    Best Paper Award, The 4th International Frontiers of Algorithmics Workshop, 2010
    Best Paper Award, The 20th International Symposium on Algorithms and Computation, 2009
    Silver Prize, New World Mathematics Award (Ph.D. Thesis), The 4th International Congress of Chinese Mathematicians, 2007
    Best Paper Award, The 47th Annual IEEE Symposium on Foundations of Computer Science, 2006

    Grants:
    Current, Natural Science Foundation, Title: On the Complexity of Optimal Pricing and Mechanism Design, Period: Aug 2014 – Jul 2017, Amount: $449,985.
  • More Revenue from Two Samples via Factor Revealing SDPs
    EC'20 Session 2e: Revenue Maximization
    More Revenue from Two Samples via Factor Revealing SDPs
    CONSTANTINOS DASKALAKIS, Massachusetts Institute of Technology
    MANOLIS ZAMPETAKIS, Massachusetts Institute of Technology

    We consider the classical problem of selling a single item to a single bidder whose value for the item is drawn from a regular distribution F, in a "data-poor" regime where F is not known to the seller, and very few samples from F are available. Prior work [9] has shown that one sample from F can be used to attain a 1/2-factor approximation to the optimal revenue, but it has been challenging to improve this guarantee when more samples from F are provided, even when two samples from F are provided. In this case, the best approximation known to date is 0.509, achieved by the Empirical Revenue Maximizing (ERM) mechanism [2]. We improve this guarantee to 0.558, and provide a lower bound of 0.65. Our results are based on a general framework, based on factor-revealing Semidefinite Programming relaxations aiming to capture as tightly as possible a superset of product measures of regular distributions, the challenge being that neither regularity constraints nor product measures are convex constraints. The framework is general and can be applied in more abstract settings to evaluate the performance of a policy chosen using independent samples from a distribution and applied on a fresh sample from that same distribution.

    CCS Concepts: • Theory of computation → Algorithmic game theory and mechanism design; Algorithmic mechanism design.
    Additional Key Words and Phrases: revenue maximization, two samples, mechanism design, semidefinite relaxation
    ACM Reference Format: Constantinos Daskalakis and Manolis Zampetakis.
  • The Classes FNP and TFNP
    The Classes FNP and TFNP
    C. Wilson, Lane Department of Computer Science and Electrical Engineering, West Virginia University

    Outline:
    1. Function Problems defined: What are Function Problems?; FSAT Defined; TSP Defined
    2. Relationship between Function and Decision Problems: RL Defined; Reductions between Function Problems
    3. Total Functions Defined: FACTORING; HAPPYNET; ANOTHER HAMILTON CYCLE
  • Lecture: Complexity of Finding a Nash Equilibrium
    Algorithmic Game Theory
    Lecture Date: September 20, 2011
    Lecture: Complexity of Finding a Nash Equilibrium
    Lecturer: Christos Papadimitriou. Scribe: Miklos Racz, Yan Yang

    In this lecture we are going to talk about the complexity of finding a Nash Equilibrium. In particular, the class of problems NP is introduced and several important subclasses are identified. Most importantly, we are going to prove that finding a Nash equilibrium is PPAD-complete (defined in Section 2). For an expository article on the topic, see [4], and for a more detailed account, see [5].

    1 Computational complexity
    For us the best way to think about computational complexity is that it is about search problems. The two most important classes of search problems are P and NP.

    1.1 The complexity class NP
    NP stands for non-deterministic polynomial. It is a class of problems that are at the core of complexity theory. The classical definition is in terms of yes-no problems; here, we are concerned with the search problem form of the definition.

    Definition 1 (Complexity class NP). The class of all search problems. A search problem A is a binary predicate A(x, y) that is efficiently (in polynomial time) computable and balanced (the length of x and y do not differ exponentially). Intuitively, x is an instance of the problem and y is a solution. The search problem for A is this: "Given x, find y such that A(x, y), or if no such y exists, say 'no'." The class of all search problems is called NP. Examples of NP problems include the following two.
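As one concrete instance of the definition (our illustration, not from the scribe notes), SAT in search-problem form takes x to be a CNF formula and y a truth assignment, with A(x, y) the polynomial-time check that y satisfies every clause of x:

```python
# Illustrative predicate A(x, y) for SAT as a search problem. x is a CNF formula
# given as a list of clauses, each clause a list of nonzero integers where -i
# means "variable i negated"; y is a truth assignment (dict: variable -> bool).
def satisfies(cnf, assignment):
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

# x = (v1 or not v2) and (v2 or v3),  y = {v1: True, v2: True, v3: False}
print(satisfies([[1, -2], [2, 3]], {1: True, 2: True, 3: False}))   # True
```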
  • Testing Ising Models
    Testing Ising Models
    Constantinos Daskalakis, Nishanth Dikkala, Gautam Kamath
    July 12, 2019

    Citation: Daskalakis, Constantinos et al. "Testing Ising Models." IEEE Transactions on Information Theory, 65, 11 (November 2019): 6829–6852. © 2019 The Author(s)
    As Published: 10.1109/TIT.2019.2932255. Publisher: Institute of Electrical and Electronics Engineers (IEEE). Version: Author's final manuscript.
    Citable link: https://hdl.handle.net/1721.1/129990
    Terms of Use: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)

    Abstract. Given samples from an unknown multivariate distribution p, is it possible to distinguish whether p is the product of its marginals versus p being far from every product distribution? Similarly, is it possible to distinguish whether p equals a given distribution q versus p and q being far from each other? These problems of testing independence and goodness-of-fit have received enormous attention in statistics, information theory, and theoretical computer science, with sample-optimal algorithms known in several interesting regimes of parameters. Unfortunately, it has also been understood that these problems become intractable in large dimensions, necessitating exponential sample complexity. Motivated by the exponential lower bounds for general distributions as well as the ubiquity of Markov Random Fields (MRFs) in the modeling of high-dimensional distributions, we initiate the study of distribution testing on structured multivariate distributions, and in particular the prototypical example of MRFs: the Ising Model. We demonstrate that, in this structured setting, we can avoid the curse of dimensionality, obtaining sample and time efficient testers for independence and goodness-of-fit.
  • The Relative Complexity of NP Search Problems
    Journal of Computer and System Sciences 57, 3–19 (1998), Article No. SS981575
    The Relative Complexity of NP Search Problems

    Paul Beame, Computer Science and Engineering, University of Washington, Box 352350, Seattle, Washington 98195-2350. E-mail: beame@cs.washington.edu
    Stephen Cook, Computer Science Department, University of Toronto, Canada M5S 3G4. E-mail: sacook@cs.toronto.edu
    Jeff Edmonds, Department of Computer Science, York University, Toronto, Ontario, Canada M3J 1P3. E-mail: jeff@cs.yorku.ca
    Russell Impagliazzo, Computer Science and Engineering, UC San Diego, 9500 Gilman Drive, La Jolla, California 92093-0114. E-mail: russell@cs.ucsd.edu
    Toniann Pitassi, Department of Computer Science, University of Arizona, Tucson, Arizona 85721-0077. E-mail: toni@cs.arizona.edu
    Received January 1998

    Abstract. Papadimitriou introduced several classes of NP search problems based on combinatorial principles which guarantee the existence of solutions to the problems. Many interesting search problems not known to be solvable in polynomial time are contained in these classes, and a number of them are complete problems. We consider the question of the relative complexity of these search problem classes. We prove several separations which show that in a generic relativized world the search classes are distinct and there is a standard search problem in …

    1. INTRODUCTION
    In the study of computational complexity, there are many problems that are naturally expressed as problems "to find" but are converted into decision problems to fit into standard complexity classes. For example, a more natural problem than determining whether or not a graph is 3-colorable might be that of finding a 3-coloring of the graph if it exists.
  • Total Functions in the Polynomial Hierarchy
    Electronic Colloquium on Computational Complexity, Report No. 153 (2020)
    TOTAL FUNCTIONS IN THE POLYNOMIAL HIERARCHY
    ROBERT KLEINBERG, DANIEL MITROPOLSKY, AND CHRISTOS PAPADIMITRIOU

    Abstract. We identify several genres of search problems beyond NP for which existence of solutions is guaranteed. One class that seems especially rich in such problems is PEPP (for "polynomial empty pigeonhole principle"), which includes problems related to existence theorems proved through the union bound, such as finding a bit string that is far from all codewords, finding an explicit rigid matrix, as well as a problem we call Complexity, capturing Complexity Theory's quest. When the union bound is generous, in that solutions constitute at least a polynomial fraction of the domain, we have a family of seemingly weaker classes α-PEPP, which are inside FP^NP|poly. Higher in the hierarchy, we identify the constructive version of the Sauer–Shelah lemma and the appropriate generalization of PPP that contains it. The resulting total function hierarchy turns out to be more stable than the polynomial hierarchy: it is known that, under oracles, total functions within FNP may be easy, but total functions a level higher may still be harder than FP^NP.

    1. Introduction
    The complexity of total functions has emerged over the past three decades as an intriguing and productive branch of Complexity Theory. Subclasses of TFNP, the set of all total functions in FNP, have been defined and studied: PLS, PPP, PPA, PPAD, PPADS, and CLS. These classes are replete with natural problems, several of which turned out to be complete for the corresponding class, see e.g.
  • Optimisation, Function Problems, FNP and FP
    Complexity Theory (lecture slides), Anuj Dawar, May 21, 2007

    Optimisation
    The Travelling Salesman Problem was originally conceived of as an optimisation problem: to find a minimum cost tour. We forced it into the mould of a decision problem, TSP, in order to fit it into our theory of NP-completeness. Similar arguments can be made about the problems CLIQUE and IND.

    This is still reasonable, as we are establishing the difficulty of the problems. A polynomial time solution to the optimisation version would give a polynomial time solution to the decision problem. Also, a polynomial time solution to the decision problem would allow a polynomial time algorithm for finding the optimal value, using binary search, if necessary.

    Function Problems
    Still, there is something interesting to be said for function problems arising from NP problems. Suppose $L = \{ x \mid \exists y\, R(x, y) \}$, where R is a polynomially-balanced, polynomial time decidable relation. A witness function for L is any function f such that:
    • if $x \in L$, then f(x) = y for some y such that R(x, y);
    • f(x) = "no" otherwise.

    FNP and FP
    A function which, for any given Boolean expression φ, gives a satisfying truth assignment if φ is satisfiable, and returns "no" otherwise, is a witness function for SAT. If any witness function for SAT is computable in polynomial time, then P = NP. If P = NP, then for every language in NP, some witness function is computable in polynomial time, by a binary search algorithm. P = NP if, and only if, FNP = FP.
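The step from P = NP to polynomial-time witness functions can be made concrete by the standard self-reducibility argument for SAT (a variable-by-variable reduction, a close cousin of the binary search mentioned in the slides). The sketch below is ours, not from the slides, and assumes a hypothetical polynomial-time decider sat_decide, which exists only if P = NP.

```python
# Sketch of a witness function for SAT via self-reducibility. Formulas are lists
# of clauses; a clause is a list of nonzero integers, with -i meaning "variable i
# negated". `sat_decide` is a hypothetical polynomial-time satisfiability decider.
def substitute(cnf, var, value):
    # Fix `var` to `value`: drop satisfied clauses, remove falsified literals.
    new_cnf = []
    for clause in cnf:
        if any(abs(lit) == var and (lit > 0) == value for lit in clause):
            continue
        new_cnf.append([lit for lit in clause if abs(lit) != var])
    return new_cnf

def sat_witness(cnf, variables, sat_decide):
    if not sat_decide(cnf):
        return "no"
    assignment = {}
    for v in variables:
        for value in (True, False):
            reduced = substitute(cnf, v, value)
            if sat_decide(reduced):             # keep a value that stays satisfiable
                assignment[v], cnf = value, reduced
                break
    return assignment
```

Plugging in a brute-force decider lets one test this on small formulas; the point is only that the number of calls to sat_decide is linear in the number of variables, so a polynomial-time decider yields a polynomial-time witness function.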