The Complexity of Decision Problems in Automata Theory and Logic, by Larry J. Stockmeyer

Total pages: 16

File type: PDF, size: 1020 KB

ABSTRACT

The inherent computational complexity of a variety of decision problems in mathematical logic and the theory of automata is analyzed in terms of Turing machine time and space and in terms of the complexity of Boolean networks. The problem of deciding whether a star-free expression (a variation of the regular expressions of Kleene used to describe languages accepted by finite automata) defines the empty set is shown to require time and space exceeding any composition of functions exponential in the length of expressions. In particular, this decision problem is not elementary-recursive in the sense of Kalmar. The emptiness problem can be reduced efficiently to decision problems for truth or satisfiability of sentences in the first order monadic theory of (N,<), the first order theory of linear orders, and the first order theory of two successors and prefix, among others. It follows that the decision problems for these theories are also not elementary-recursive. The number of Boolean operations, and hence the size of logical circuits, required to decide truth in several familiar logical theories of sentences only a few hundred characters long is shown to exceed the number of protons required to fill the known universe. The methods of proof are analogous to the arithmetizations and reducibility arguments of recursive function theory.

Keywords: computational complexity, decision procedure, star-free, Turing machine

AMS (MOS) Subject Classification Scheme (1970): primary 68A20, 02G05; secondary 68A40, 94B20

Table of Contents
1. Introduction
2. The Model of Computation
   2.1 The Basic Model
   2.2 A Technically Useful Model
3. Efficient Reducibility
   3.1 Definitions
   3.2 Applications to Complexity Bounds
   3.3 Other Applications
4. Regular-Like Expressions
   4.1 Expressions With Squaring
   4.2 Expressions With Complementation
   4.3 (deleted)
   4.4 Expressions Over a One-Letter Alphabet
5. Nonelementary Logical Theories
6. Complexity of Finite Problems
   6.1 Second Order Theory of Successor
   6.2 First Order Integer Arithmetic
7. Conclusion
Bibliography
Appendix I. Notation
Appendix II. Some Properties of logspace

List of Figures
Figure 4.1: E2 "matches" a word w
Figure 6.1: P, B, and d
Figure 6.2: Illustrating the proof of Lemma 6.5.2 (i) and (ii)
Figure 6.3: I and J "code" a circuit
Figure 6.4: The circuit C0

Chapter 1. Introduction

One major goal of computational complexity is to achieve the ability to characterize precisely the amount of computational resource needed to solve given computational problems or classes of problems. Two important kinds of computational resource are time and space, respectively the number of basic computational steps and the amount of memory used in solving the problem. The complexity of a particular problem can be characterized by upper and lower bounds on the computational resources sufficient to solve the problem. Upper bounds are usually established by exhibiting a specific algorithm which solves the problem and whose time and/or space complexity can be bounded from above. Much progress has been made on this positive side of the complexity question. Many clever and efficient algorithms have been devised for performing a wide variety of computational tasks (cf. D.E. Knuth, The Art of Computer Programming). However, the progress made on the negative side of the question has been less striking.
In order to establish a lower bound on the complexity of a particular problem, one must show that some minimum amount of resource (time or space) is always required no matter which of the infinitely many possible algorithms is used or how cleverly one writes the algorithm to solve the problem. It is this latter side of the complexity question which we address in this paper. Although lower bound results are negative in nature, they have the value that they enable one to cease looking for efficient algorithms when none exist. Also, the exhibition of specific problems or classes of problems which are provably difficult may give insight into the "reasons" for their difficulty, and these "reasons" and proofs of difficulty may provide clues for reformulating the problems so that in revised form they become tractable.

Let us now sketch a bit more precisely what we mean by "computational problem" and "algorithm". (Complete definitions appear in the main text.) Many computational problems can be viewed as problems of function evaluation. In particular, consider functions mapping strings of symbols to strings of symbols. As a concept of "algorithm" we could choose any one of a variety of universal computer models. For definiteness we choose the well-known Turing machine model. A Turing machine M computes the function f if M, when started with any string x on its tape, eventually halts with f(x) on its tape. The time and space used by M on input x are respectively the number of basic steps executed and the number of tape squares visited by M before halting when started on input x. In general, the time and space will vary depending on the particular input x. One simplification which is commonly made is to measure the time and space solely as a function of the length of the input string.

Note that some functions can be complex for a reason which sheds little light on the question of inherent difficulty; namely, a function can be computed no faster than the time required to print the value of the function. For example, consider the function which, for any positive integer m, maps the binary representation of m to the binary representation of 2^m. Any algorithm which computes this function uses at least 2^(n-1) steps on many inputs of length n for all n, these steps being required to print the answer consisting of a one followed by as many as 2^n - 1 zeroes. We avoid these cases by considering only functions whose value is always 0 or 1. The problem of computing such a 0-1 valued function f can be viewed as the problem of recognizing the set of inputs which f maps to 1. For example, we may wish to recognize the set of all strings which code true sentences of some decidable logical theory. When such a "set recognition" or "decision" problem is shown to require time 2^n on inputs of length n for infinitely many n, we conclude that there is something inherently complex about the set itself; that is, 2^n steps must be spent in deciding what to answer, not in printing the answer.

Some information is known concerning the complexity of set recognition problems. There are known to be sets whose recognition problems are recursive yet "arbitrarily" complex [Rab60]. Let T(n) and S(n) be any recursive functions from positive integers to positive integers. Well-known diagonalization arguments imply the existence of a recursive set A_hard such that any algorithm recognizing A_hard requires at least time T(n) and space S(n) on all inputs of length n for all sufficiently large n.
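To see the output-length argument concretely, here is a minimal sketch (an illustration in Python, not part of the thesis; the helper name is ours). For an n-bit input m, the answer bin(2^m) is a one followed by m zeroes, and m can be as large as 2^n - 1, so merely writing the answer already costs on the order of 2^(n-1) steps.

```python
# Illustrative sketch (not from the thesis): the function m -> 2^m is costly
# only because its answer is long.

def output_length_bits(m: int) -> int:
    """Length in bits of bin(2**m): a '1' followed by m zeroes."""
    return m + 1

for n in range(1, 8):
    worst_m = 2**n - 1                      # largest input of length n bits
    print(f"input length n = {n}: answer up to {output_length_bits(worst_m)} bits")

# Every n-bit input m (no leading zeroes) satisfies m >= 2**(n - 1), so at
# least about 2**(n - 1) steps go into printing alone; nothing is revealed
# about the difficulty of deciding anything.  Hence the thesis restricts
# attention to 0-1 valued functions, i.e. set recognition problems.
```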
It is also possible to construct arbitrarily difficult recursive problems by considering "bounded" versions of undecidable problems. The "bound" implies decidability, but the problem can be made arbitrarily complex by making the "bound" arbitrarily large. For example, Blum [Bl66] and Jeroslow [Jer72] consider a bounded version of the halting problem, and Ehrenfeucht [Ehr72] considers a bounded version of the first order theory of integer arithmetic.

One might animadvert that sets such as A_hard above are not "natural" in the sense that they were explicitly constructed to be difficult to recognize. Informally, by "natural" computational problem we mean one which has arisen previously in the mathematical literature (excluding complexity theory); for example, decision problems drawn from logic and automata theory, word problems in algebra, etc. Under even this weak view of "natural", there are few examples of natural recursive set recognition problems whose time complexity has been shown to necessarily grow faster than linearly in the length of the input. Excluding "diagonalization" and "bounded undecidable" problems, prior to the research described here (and related work by Meyer [Mey73], Fischer and Rabin [FR74], and Hunt [Hun73b]) we know of no examples of natural recursive set recognition problems whose time complexity had been shown to necessarily grow more than polynomially, or whose space complexity had been shown to grow more than linearly, in the length of the input.

We now outline the remainder of this paper. Chapters 2 and 3 are devoted mainly to definitions of key concepts and descriptions of the technical machinery to be used in proving the results of Chapters 4 and 5. Chapter 2 defines our formal model of "algorithm" for set recognition and function computation. This model is a slight variant of the well-known Turing machine. Known facts concerning the model which are relevant to the sequel are also stated. Chapter 3 defines the concept of "efficient reducibility". This concept was first formally defined by Cook [Co71a], though its significance was emphasized earlier by Meyer and McCreight [MM71]. Speaking informally for the moment, we say that a set A is efficiently reducible to a set B, written A ≤_eff B, if there is an efficiently computable function f such that any question of the form "Is x in A?" has the same answer as the question "Is f(x) in B?". Instead of being precise about what is meant by f being "efficiently computable", let us for the moment just assume that the time and space required to compute f is very small compared to the minimum time required to recognize A or B. Now given an algorithm M which recognizes B, one can construct an algorithm M' which recognizes A as follows.
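A minimal sketch of this construction (an illustration in Python, not the thesis's own formulation; the helper name build_M_prime and the toy sets A and B are ours): M' on input x first computes f(x) and then behaves exactly as M would on f(x), so M' accepts x precisely when f(x) is in B, i.e., precisely when x is in A.

```python
# Illustrative sketch (not from the thesis): from a recognizer M for B and an
# efficiently computable reduction f with  x in A  <=>  f(x) in B,  build a
# recognizer M' for A by composing the two.

from typing import Callable

def build_M_prime(f: Callable[[str], str],
                  M: Callable[[str], bool]) -> Callable[[str], bool]:
    """Return M': on input x, apply the reduction f, then run M on f(x)."""
    def M_prime(x: str) -> bool:
        y = f(x)          # cheap by the assumption on "efficiently computable"
        return M(y)       # answer exactly as M would answer for f(x)
    return M_prime

# Toy usage (our own example): A = binary strings of even length,
# B = strings containing no '1', and f(x) = '1' * (len(x) % 2).
M_prime = build_M_prime(lambda x: "1" * (len(x) % 2),
                        lambda y: "1" not in y)
assert M_prime("1010") is True and M_prime("101") is False

# The cost of M' is the cost of f plus the cost of M on f(x); since f is
# cheap, any lower bound on the complexity of recognizing A transfers to B.
```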
Recommended publications
  • Infinitary Logic and Inductive Definability Over Finite Structures
    University of Pennsylvania ScholarlyCommons, Technical Reports (CIS), Department of Computer & Information Science, November 1991. Recommended citation: Anuj Dawar, Steven Lindell, and Scott Weinstein, "Infinitary Logic and Inductive Definability Over Finite Structures", University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-91-97, November 1991. https://repository.upenn.edu/cis_reports/365
    Abstract: The extensions of first-order logic with a least fixed point operator (FO + LFP) and with a partial fixed point operator (FO + PFP) are known to capture the complexity classes P and PSPACE respectively in the presence of an ordering relation over finite structures. Recently, Abiteboul and Vianu [AV91b] investigated the relation of these two logics in the absence of an ordering, using a machine model of generic computation. In particular, they showed that the two languages have equivalent expressive power if and only if P = PSPACE. These languages can also be seen as fragments of an infinitary logic where each formula has a bounded number of variables, L^ω_∞ω (see, for instance, [KV90]). We present a treatment of the results in [AV91b] from this point of view. In particular, we show that we can write a formula of FO + LFP that defines an ordering of the L^k_∞ω types, which yields a generalization of the equivalence of FO + LFP and P from ordered structures to classes of structures where every element is definable.
  • NP-Completeness (Chapter 8)
    CSE 421 Algorithms: NP-Completeness (Chapter 8). What can we feasibly compute? Focus so far has been to give good algorithms for specific problems (and general techniques that help do this). Now shifting focus to problems where we think this is impossible. Sadly, there are many… History. A Brief History of Ideas: From Classical Greece, if not earlier, "logical thought" held to be a somewhat mystical ability. Mid-1800s: Boolean Algebra and foundations of mathematical logic created possible "mechanical" underpinnings. 1900: David Hilbert's famous speech outlines program: mechanize all of mathematics? http://mathworld.wolfram.com/HilbertsProblems.html 1930s: Gödel, Church, Turing, et al. prove it's impossible. More History: 1930/40s, what is (is not) computable; 1960/70s, what is (is not) feasibly computable. Goal – a (largely) technology-independent theory of time required by algorithms. Key modeling assumptions/approximations: asymptotic (Big-O), worst case is revealing; polynomial, exponential time – qualitatively different. Polynomial Time. The class P. Definition: P = the set of (decision) problems solvable by computers in polynomial time, i.e., T(n) = O(n^k) for some fixed k (independent of input). These problems are sometimes called tractable problems. Examples: sorting, shortest path, MST, connectivity, RNA folding & other dyn. prog., flows & matching – i.e., most of this qtr (exceptions: Change-Making/Stamps, Knapsack, TSP). Why "Polynomial"? The point is not that n^2000 is a nice time bound, or that the differences among n and 2n and n^2 are negligible. Rather, simple theoretical tools may not easily capture such differences, whereas exponentials are qualitatively different from polynomials and may be amenable to theoretical analysis.
  • On the Complexity of Numerical Analysis
    On the Complexity of Numerical Analysis. Eric Allender (Rutgers, the State University of NJ, Department of Computer Science, Piscataway, NJ 08854-8019, USA, [email protected]); Peter Bürgisser (Paderborn University, Department of Mathematics, DE-33095 Paderborn, Germany, [email protected]); Johan Kjeldgaard-Pedersen (PA Consulting Group, Decision Sciences Practice, Tuborg Blvd. 5, DK 2900 Hellerup, Denmark, [email protected]); Peter Bro Miltersen (University of Aarhus, Department of Computer Science, IT-parken, DK 8200 Aarhus N, Denmark, [email protected]).
    Abstract: We study two quite different approaches to understanding the complexity of fundamental problems in numerical analysis. We show that both hinge on the question of understanding the complexity of the following problem, which we call PosSLP: given a division-free straight-line program producing an integer N, decide whether N > 0. We show that PosSLP lies in the counting hierarchy, and combining our results with work of Tiwari, we show that the Euclidean Traveling Salesman Problem lies in the counting hierarchy – the previous best upper bound for this important problem
    In Section 1.3 we discuss our main technical contributions: proving upper and lower bounds on the complexity of PosSLP. In Section 1.4 we present applications of our main result with respect to the Euclidean Traveling Salesman Problem and the Sum-of-Square-Roots problem. 1.1 Polynomial Time Over the Reals: The Blum-Shub-Smale model of computation over the reals provides a very well-studied complexity-theoretic setting in which to study the computational problems of numerical analysis.
  • The Complexity Zoo
    The Complexity Zoo. Scott Aaronson, www.ScottAaronson.com. LaTeX translation by Chris Bourke ([email protected]). 417 classes and counting. Contents: 1. About This Document; 2. Introductory Essay (2.1 Recommended Further Reading, 2.2 Other Theory Compendia, 2.3 Errors?); 3. Pronunciation Guide; 4. Complexity Classes; 5. Special Zoo Exhibit: Classes of Quantum States and Probability Distributions; 6. Acknowledgements; 7. Bibliography. About This Document: What is this? Well, it's a PDF version of the website www.ComplexityZoo.com typeset in LaTeX using the complexity package. Well, what's that? The original Complexity Zoo is a website created by Scott Aaronson which contains a (more or less) comprehensive list of complexity classes studied in the area of theoretical computer science known as Computational Complexity. I took on the (mostly painless, thank god for regular expressions) task of translating the Zoo's HTML code to LaTeX for two reasons. First, as a regular Zoo patron, I thought, "what better way to honor such an endeavor than to spruce up the cages a bit and typeset them all in beautiful LaTeX." Second, I thought it would be a perfect project to develop complexity, a LaTeX package I've created that defines commands to typeset (almost) all of the complexity classes you'll find here (along with some handy options that allow you to conveniently change the fonts with a single option parameter). To get the package, visit my own home page at http://www.cse.unl.edu/~cbourke/.
  • Computability (Section 12.3)
    Computability (Section 12.3). Computability: • Some problems cannot be solved by any machine/algorithm. To prove such statements we need to effectively describe all possible algorithms. • Example (Turing machines): associate a Turing machine with each n ∈ N as follows: n ↔ b(n) (the binary representation of n) ↔ a(b(n)) (b(n) split into 7-bit ASCII blocks, w/ leading 0's) ↔ if a(b(n)) is the syntax of a TM then a(b(n)) else (0, a, a, S, halt) fi. • So we can effectively describe all possible Turing machines: T0, T1, T2, ... Continued: • Of course, we could use the same technique to list all possible instances of any computational model. For example, we can effectively list all possible Simple programs and we can effectively list all possible partial recursive functions. • If we want to use the Church-Turing thesis, then we can effectively list all possible solutions (e.g., Turing machines) to every intuitively computable problem. Decidable? • Is an arbitrary first-order wff valid? Undecidable and partially decidable. • Does a DFA accept infinitely many strings? Decidable. • Does a PDA accept a string s? Decidable. Decision Problems: • A decision problem is a problem that can be phrased as a yes/no question. Such a problem is decidable if an algorithm exists to answer yes or no to each instance of the problem. Otherwise it is undecidable. A decision problem is partially decidable if an algorithm exists to halt with the answer yes to yes-instances of the problem, but may run forever if the answer is no.
  • Canonical Models and the Complexity of Modal Team Logic
    On the Complexity of Team Logic and its Two-Variable Fragment. Martin Lück, Leibniz Universität Hannover, Germany, [email protected]. (arXiv:1804.04968v1 [cs.LO], 13 Apr 2018.)
    Abstract: We study the logic FO(∼), the extension of first-order logic with team semantics by unrestricted Boolean negation. It was recently shown axiomatizable, but otherwise has not yet received much attention in questions of computational complexity. In this paper, we consider its two-variable fragment FO2(∼) and prove that its satisfiability problem is decidable, and in fact complete for the recently introduced non-elementary class TOWER(poly). Moreover, we classify the complexity of model checking of FO(∼) with respect to the number of variables and the quantifier rank, and prove a dichotomy between PSPACE- and ATIME-ALT(exp, poly)-completeness. To achieve the lower bounds, we propose a translation from modal team logic MTL to FO2(∼) that extends the well-known standard translation from modal logic ML to FO2. For the upper bounds, we translate to a fragment of second-order logic. Keywords: team semantics, two-variable logic, complexity, satisfiability, model checking. 2012 ACM Subject Classification: Theory of computation → Complexity theory and logic; Logic. 1. Introduction: In the last decades, the work of logicians has unearthed a plethora of decidable fragments of first-order logic FO. Many of these cases are restricted quantifier prefixes, such as the BSR-fragment which contains only ∃∗∀∗-sentences [30]. Others include the guarded fragment GF [1], the recently introduced separated fragment SF [32, 34], or the two-variable fragment FO2 [13, 27, 31].
  • LSPACE VS NP IS AS HARD AS P VS NP
    LSPACE VS NP IS AS HARD AS P VS NP. Frank Vega. Abstract: The P versus NP problem is a major unsolved problem in computer science. It consists in knowing the answer to the following question: is P equal to NP? Other major complexity classes are LSPACE, PSPACE, ESPACE, E and EXP. Whether LSPACE = P is a fundamental question that is as important as it is unresolved. We show that if P = NP, then LSPACE = NP. Consequently, if LSPACE is not equal to NP, then P is not equal to NP. According to Lance Fortnow, it seems that LSPACE versus NP is easier to be proven. However, with this proof we show this problem is as hard as P versus NP. Moreover, we prove the complexity class P is not equal to PSPACE as a direct consequence of this result. Furthermore, we demonstrate that if PSPACE is not equal to EXP, then P is not equal to NP. In addition, if E = ESPACE, then P is not equal to NP. 1. Introduction: In complexity theory, a function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem [6]. A functional problem F is defined as a binary relation (x, y) ∈ R over strings of an arbitrary alphabet Σ: R ⊂ Σ* × Σ*. A Turing machine M solves F if for every input x such that there exists a y satisfying (x, y) ∈ R, M produces one such y, that is, M(x) = y [6].
  • Language and Automata Theory and Applications
    LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS. Carlos Martín-Vide. Characterization: • It deals with the description of properties of sequences of symbols. • Such an abstract characterization explains the interdisciplinary flavour of the field. • The theory grew with the need of formalizing and describing the processes linked with the use of computers and communication devices, but its origins are within mathematical logic and linguistics. A bit of history: • Early roots in the work of logicians at the beginning of the XXth century: Emil Post, Alonzo Church, Alan Turing; developments motivated by the search for the foundations of the notion of proof in mathematics (Hilbert). • After World War II: Claude Shannon, Stephen Kleene, John von Neumann; development of computers and telecommunications; interest in exploring the functions of the human brain. • Late 1950s: Noam Chomsky; formal methods to describe natural languages. • Last decades: molecular biology considers the sequences of molecules formed by genomes as sequences of symbols on the alphabet of basic elements; interest in describing properties like repetitions of occurrences or similarity between sequences. Chomsky hierarchy of languages: • Finite-state or regular • Context-free • Context-sensitive • Recursively enumerable. REG ⊂ CF ⊂ CS ⊂ RE. Finite automata: origins: • Warren McCulloch & Walter Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115-133, 1943. • Stephen C. Kleene. Representation of events in nerve nets and
  • 2020 SIGACT REPORT. SIGACT EC – Eric Allender, Shuchi Chawla, Nicole Immorlica, Samir Khuller (Chair), Bobby Kleinberg. September 14th, 2020
    2020 SIGACT REPORT SIGACT EC – Eric Allender, Shuchi Chawla, Nicole Immorlica, Samir Khuller (chair), Bobby Kleinberg September 14th, 2020 SIGACT Mission Statement: The primary mission of ACM SIGACT (Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory) is to foster and promote the discovery and dissemination of high quality research in the domain of theoretical computer science. The field of theoretical computer science is the rigorous study of all computational phenomena - natural, artificial or man-made. This includes the diverse areas of algorithms, data structures, complexity theory, distributed computation, parallel computation, VLSI, machine learning, computational biology, computational geometry, information theory, cryptography, quantum computation, computational number theory and algebra, program semantics and verification, automata theory, and the study of randomness. Work in this field is often distinguished by its emphasis on mathematical technique and rigor. 1. Awards ▪ 2020 Gödel Prize: This was awarded to Robin A. Moser and Gábor Tardos for their paper “A constructive proof of the general Lovász Local Lemma”, Journal of the ACM, Vol 57 (2), 2010. The Lovász Local Lemma (LLL) is a fundamental tool of the probabilistic method. It enables one to show the existence of certain objects even though they occur with exponentially small probability. The original proof was not algorithmic, and subsequent algorithmic versions had significant losses in parameters. This paper provides a simple, powerful algorithmic paradigm that converts almost all known applications of the LLL into randomized algorithms matching the bounds of the existence proof. The paper further gives a derandomized algorithm, a parallel algorithm, and an extension to the “lopsided” LLL.
  • Homework 4 Solutions. Uploaded 4:00pm on Dec 6, 2017. Due: Monday Dec 4, 2017
    CS3510 Design & Analysis of Algorithms, Section A. Homework 4 Solutions. Uploaded 4:00pm on Dec 6, 2017. Due: Monday Dec 4, 2017. This homework has a total of 3 problems on 4 pages. Solutions should be submitted to GradeScope before 3:00pm on Monday Dec 4. The problem set is marked out of 20; you can earn up to 21 = 1 + 8 + 7 + 5 points. If you choose not to submit a typed write-up, please write neatly and legibly. Collaboration is allowed/encouraged on problems; however, each student must independently complete their own write-up, and list all collaborators. No credit will be given to solutions obtained verbatim from the Internet or other sources. Modifications since version 0: 1. 1c: (changed in version 2) reworded to showing NP-hardness. 2. 2b: (changed in version 1) added a hint. 3. 3b: (changed in version 1) added comment about the two loops' u variables being different due to scoping. 0. [1 point, only if all parts are completed] (a) Submit your homework to Gradescope. (b) Student id is the same as on T-square: it's the one with alphabet + digits, NOT the 9-digit number from the student card. (c) Pages for each question are separated correctly. (d) Words on the scan are clearly readable. 1. (8 points) NP-Completeness. Recall that the SAT problem, or the Boolean Satisfiability problem, is defined as follows: • Input: A CNF formula F having m clauses in n variables x1, x2, ..., xn. There is no restriction on the number of variables in each clause.
  • Glossary of Complexity Classes
    Appendix A: Glossary of Complexity Classes. Summary: This glossary includes self-contained definitions of most complexity classes mentioned in the book. Needless to say, the glossary offers a very minimal discussion of these classes, and the reader is referred to the main text for further discussion. The items are organized by topics rather than by alphabetic order. Specifically, the glossary is partitioned into two parts, dealing separately with complexity classes that are defined in terms of algorithms and their resources (i.e., time and space complexity of Turing machines) and complexity classes defined in terms of non-uniform circuits (and referring to their size and depth). The algorithmic classes include time-complexity based classes such as P, NP, coNP, BPP, RP, coRP, PH, E, EXP and NEXP, and the space-complexity classes L, NL, RL and PSPACE. The non-uniform classes include the circuit classes P/poly as well as NC^k and AC^k. Definitions and basic results regarding many other complexity classes are available at the constantly evolving Complexity Zoo. A.1 Preliminaries: Complexity classes are sets of computational problems, where each class contains problems that can be solved with specific computational resources. To define a complexity class one specifies a model of computation, a complexity measure (like time or space), which is always measured as a function of the input length, and a bound on the complexity of problems in the class. We follow the tradition of focusing on decision problems, but refer to these problems using the terminology of promise problems.
  • The Classes FNP and TFNP
    The Classes FNP and TFNP. C. Wilson, Lane Department of Computer Science and Electrical Engineering, West Virginia University. Outline: 1. Function Problems defined (What are Function Problems? FSAT Defined. TSP Defined.) 2. Relationship between Function and Decision Problems (RL Defined. Reductions between Function Problems.) 3. Total Functions Defined (FACTORING. HAPPYNET. ANOTHER HAMILTON CYCLE.)