Limits to Parallel Computation: P-Completeness Theory


Limits to Parallel Computation: P-Completeness Theory

RAYMOND GREENLAW, University of New Hampshire
H. JAMES HOOVER, University of Alberta
WALTER L. RUZZO, University of Washington

New York  Oxford
OXFORD UNIVERSITY PRESS
1995

This book is dedicated to our families, who already know that life is inherently sequential.

Preface

This book is an introduction to the rapidly growing theory of P-completeness — the branch of complexity theory that focuses on identifying the "hardest" problems in the class P of problems solvable in polynomial time. P-complete problems are of interest because they all appear to lack highly parallel solutions. That is, algorithm designers have failed to find NC algorithms for them: feasible highly parallel solutions that take time polynomial in the logarithm of the problem size while using only a polynomial number of processors. Consequently, the promise of parallel computation, namely that applying more processors to a problem can greatly speed its solution, appears to be broken by the entire class of P-complete problems. This state of affairs is succinctly expressed as the following question: Does P equal NC?

Organization of the Material

The book is organized into two parts: an introduction to P-completeness theory, and a catalog of P-complete and open problems.

The first part of the book is a thorough introduction to the theory of P-completeness. We begin with an informal introduction. Then we discuss the major parallel models of computation, describe the classes NC and P, and present the notions of reducibility and completeness. We subsequently introduce some fundamental P-complete problems, followed by evidence suggesting why NC does not equal P. Next, we discuss in detail the primary P-complete problem, that of evaluating a Boolean circuit (a brief illustrative sketch contrasting a highly parallel algorithm with sequential circuit evaluation follows the contents below). Following this we examine several sequential paradigms and their parallel versions. We describe a model for classifying algorithms as inherently sequential. We finish Part I with some general conclusions about practical parallel computation.

Because of the broad range of topics in the rapidly growing field of parallel computation, we are unable to provide a detailed treatment of several related topics. For example, we are unable to discuss parallel algorithm design and development in detail. For important and broad topics like this, we provide the reader with some references to the available literature.

The second part of the book provides a comprehensive catalog of P-complete problems, including several unpublished and new P-completeness results. For each problem we try to provide the essential idea underlying its P-completeness reduction, and as much information and additional references about related problems as possible. In addition to the P-complete problems catalog, we provide a list of open problems, a list of problems in the class CC, and a list of problems in the class RNC.

Using the Book in a Course

The book has been designed so that it will be suitable as a text for a one-semester graduate course covering topics in parallel computation. The first part of the book provides introductory material, and the problems can easily be converted into numerous exercises. The book would be ideal for use in a seminar course focusing on P-completeness theory. It can also be used as a supplementary text for a graduate-level course in complexity theory.
Several of the chapters in the book could be covered in a graduate-level course, or the book could be used as a reference for courses in the theory of computation. This work could also be used as a rich source of sample problems for a variety of different courses. For the motivated student or researcher interested in learning about P-completeness, the book can be used effectively for self-study.

Additional Problems and Corrections

In producing this book over a period of many years we have tried to be as accurate and thorough as possible. Undoubtedly, there are still some errors and omissions in our lists of problems. In anticipation of possible future printings, we would like to correct these errors and incorporate additional problems. We welcome suggestions, corrections, new problems, and further references. For each of the P-complete problems, we are also interested in references to papers that provide the best known parallel algorithms for the problem.

Please send general comments, corrections, new problems, bibliography entries, and/or copies of the relevant papers to Ray Greenlaw at [email protected]. Corrigenda and additions to this work can be found in the directory pub/hoover/P-complete at ftp.cs.ualberta.ca and via the World Wide Web in Jim Hoover's home page at http://www.cs.ualberta.ca/

Research Support

The authors gratefully acknowledge financial support from the following organizations: Ray Greenlaw's research was supported in part by National Science Foundation grant CCR-9209184. Jim Hoover's research was supported by Natural Sciences and Engineering Research Council of Canada grant OGP 38937. Larry Ruzzo's research was supported in part by NSF grants ECS-8306622, CCR-8703196, CCR-9002891, and by NSF/DARPA grant CCR-8907960. A portion of this work was performed while visiting the University of Toronto, whose hospitality is also gratefully acknowledged.

Acknowledgments

An extra special thanks to Martin Tompa, who helped keep this project alive when it was close to dying. Martin has provided us with many useful suggestions over the years — all of which have helped to make this a better book. A special thanks to Anne Condon and Alex Schäffer for carefully reading drafts of the book. Their suggestions helped to improve specific sections.

We wish to thank the following for contributing new problems, pointing us to appropriate literature, and providing us with corrections and suggestions on drafts of the manuscript: Richard Anderson, José Balcázar, David Barrington, Paul Beame, Erik Brisson, Anne Condon, Stephen Cook, Derek Corneil, Pilar de la Torre, Larry Denenberg, Sergio De Agostino, John Ellis, Arvind Gupta, John Hershberger, David Johnson, Marek Karpinski, Tracy Kimbrel, Dexter Kozen, Luc Longpré, Mike Luby, Jon Machta, Andrew Malton, Pierre McKenzie, Satoru Miyano, Christos Papadimitriou, Teresa Przytycka, John Reif, Alex Schäffer, Roger Simons, Jack Snoeyink, Paul Spirakis, Iain Stewart, Larry Stockmeyer, Ashok Subramanian, Eva Tardos, Shang-Hua Teng, Martin Tompa, H. Venkateswaran, Joachim von zur Gathen, and Thomas Zeugmann.

We want to thank the reviewers who provided us with general comments about the organization of the book as well as detailed comments. We have tried to incorporate as many of their suggestions as possible, although not all of them were feasible for us.

We want to thank Don Jackson at Oxford University Press.
Don has been very helpful at each step in the production of this book and has been a great person to work with. Finally, we would like to thank Mike Garey and David Johnson, whose book Computers and Intractability: A Guide to the Theory of NP-Completeness [113] has served as a model for this work.

November 1994
Ray Greenlaw
Jim Hoover
Larry Ruzzo

Authors' Addresses

Raymond Greenlaw
Department of Computer Science
University of New Hampshire
Durham, NH 03824
e-mail address: [email protected]
world wide web home page: http://www.cs.unh.edu/

H. James Hoover
Department of Computing Science
University of Alberta
Edmonton, Alberta, Canada T6G 2H1
e-mail address: [email protected]
world wide web home page: http://www.cs.ualberta.ca/

Walter L. Ruzzo
Department of Computer Science and Engineering, FR-35
University of Washington
Seattle, WA 98195
e-mail address: [email protected]
world wide web home page: http://www.cs.washington.edu/

Contents

Preface
Part I: Background and Theory
1 Introduction
  1.1 Bandersnatch Design
  1.2 Informal Background
    1.2.1 Feasible, Highly Parallel, and Inherently Sequential Problems
    1.2.2 Why is P-Completeness Important?
  1.3 Some History
  1.4 Related Works
  1.5 Overview of This Book
2 Parallel Models of Computation
  2.1 Introduction
  2.2 The PRAM Model
  2.3 The Boolean Circuit Model
    2.3.1 Uniform Circuit Families
  2.4 Circuits and PRAMs
3 Complexity
  3.1 Search and Decision Problems
  3.2 Complexity Classes
    3.2.1 P, NC, FP, and FNC
    3.2.2 The Classes NC^k and FNC^k
    3.2.3 Random NC
  3.3 Reducibility
    3.3.1 Many-one Reducibility
    3.3.2 Turing Reducibility
  3.4 Other NC Compatible Reducibilities
    3.4.1 NC^k Turing Reducibility
    3.4.2 Logarithmic Space Reducibility
  3.5 Completeness
4 Two Basic P-Complete Problems
  4.1 The Generic P-Complete Problem
  4.2 The Circuit Value Problem
5 Evidence That NC Does Not Equal P
  5.1 Introduction
  5.2 General Simulations Are Not Fast
  5.3 Fast Simulations Are Not General
  5.4 Natural Approaches Provably Fail
  5.5 Summary
6 The Circuit Value Problem
  6.1 The Circuit Value Problem Is P-Complete
  6.2 Restricted Versions of Circuit Value
7 Greedy Algorithms
  7.1 Lexicographic Greedy Algorithms
  7.2 Generic Greedy Algorithms
8 P-Complete Algorithms
  8.1 Introduction
  8.2 Inherently Sequential Algorithms
  8.3 Applications of the Model
9 Two Other Notions of P-Completeness
  9.1 Strong P-Completeness
  9.2 Strict P-Completeness
10 Approximating P-Complete Problems
  10.1 Introduction
  10.2 Approximating LFMIS Is Hard
  10.3 Approximation Schemes
11 Closing Remarks
Part II: A Compendium of Problems
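As a concrete illustration of the contrast the preface draws, the following is a minimal sketch (an illustrative example, not taken from the book). The first function computes prefix sums in O(log n) rounds; every update within a round is independent, so with one processor per element each round takes constant time, which is the flavor of a feasible highly parallel (NC-style) algorithm. The second function evaluates a Boolean circuit gate by gate in topological order, the Circuit Value Problem, for which no comparably shallow general strategy is known. The Python code, the gate encoding, and the tiny example circuit are all assumptions chosen purely for illustration.

```python
# Illustrative sketch only -- not from the book.

def parallel_prefix_sums(values):
    """Prefix sums computed in O(log n) rounds (a Hillis-Steele-style scan).

    Within each round, every position is updated independently of the others,
    so with one processor per element each round takes constant time and the
    whole computation has logarithmic depth.  The rounds are simulated
    sequentially here in plain Python.
    """
    result = list(values)
    offset = 1
    while offset < len(result):
        # All updates in this round read only the previous round's values,
        # so they are independent and could be done in parallel.
        result = [
            result[i] + (result[i - offset] if i >= offset else 0)
            for i in range(len(result))
        ]
        offset *= 2
    return result


def evaluate_circuit(gates, inputs):
    """Evaluate a Boolean circuit gate by gate in topological order.

    This is the Circuit Value Problem: easy to solve in polynomial time, but
    not known to be solvable in polylogarithmic depth in general.  For brevity
    this simplified encoding uses only AND and OR gates: `gates` is a list of
    (op, left, right) triples, where each operand is ('in', i) for input bit i
    or ('gate', j) for the value of an earlier gate j.
    """
    def value(ref, computed):
        kind, index = ref
        return inputs[index] if kind == 'in' else computed[index]

    computed = []
    for op, left, right in gates:
        a, b = value(left, computed), value(right, computed)
        computed.append(a and b if op == 'and' else a or b)
    return computed[-1]  # value of the last (output) gate


if __name__ == "__main__":
    print(parallel_prefix_sums([1, 2, 3, 4, 5]))         # [1, 3, 6, 10, 15]
    # (x0 AND x1) OR x2 with inputs x = (True, False, True)
    gates = [('and', ('in', 0), ('in', 1)), ('or', ('gate', 0), ('in', 2))]
    print(evaluate_circuit(gates, [True, False, True]))   # True
```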