Introduction to the Theory of Complexity

Daniel Pierre Bovet
Pierluigi Crescenzi

The information in this book is distributed on an “As is” basis, without warranty. Although every precaution has been taken in the preparation of this work, the authors shall not have any liability to any person or entity with respect to any loss or damage caused or alleged to be caused directly or indirectly by the information contained in this work.

First electronic edition: June 2006

Contents

1 Mathematical preliminaries
1.1 Sets, relations and functions
1.2 Set cardinality
1.3 Three proof techniques
1.4 Graphs
1.5 Alphabets, words and languages

2 Elements of computability theory
2.1 Turing machines
2.2 Machines and languages
2.3 Reducibility between languages

3 Complexity classes
3.1 Dynamic complexity measures
3.2 Classes of languages
3.3 Decision problems and languages
3.4 Time-complexity classes
3.5 The pseudo-Pascal language

4 The class P
4.1 The class P
4.2 The robustness of the class P
4.3 Polynomial-time reducibility
4.4 Uniform diagonalization

5 The class NP
5.1 The class NP
5.2 NP-complete languages
5.3 NP-intermediate languages
5.4 Computing and verifying a function
5.5 Relativization of the P ≠ NP conjecture

6 The complexity of optimization problems
6.1 Optimization problems
6.2 Underlying languages
6.3 Optimum measure versus optimum solution
6.4 Approximability
6.5 Reducibility and optimization problems

7 Beyond NP
7.1 The class coNP
7.2 The Boolean hierarchy
7.3 The polynomial hierarchy
7.4 Exponential-time complexity classes

8 Space-complexity classes
8.1 Space-complexity classes
8.2 Relations between time and space
8.3 Nondeterminism, determinism and space
8.4 Nondeterminism, complement and space
8.5 Logarithmic space
8.6 Polynomial space

9 Probabilistic algorithms and complexity classes
9.1 Some probabilistic algorithms
9.2 Probabilistic Turing machines
9.3 Probabilistic complexity classes

10 Interactive proof systems
10.1 Interactive proof systems
10.2 The power of IP
10.3 Probabilistic checking of proofs

11 Models of parallel computers
11.1 Circuits
11.2 The PRAM model
11.3 PRAM memory conflicts
11.4 A comparison of the PRAM models
11.5 Relations between circuits and PRAMs
11.6 The parallel computation thesis

12 Parallel algorithms
12.1 The class NC
12.2 Examples of NC problems
12.3 Probabilistic parallel algorithms
12.4 P-complete problems revisited

Preface

The birth of the theory of computational complexity can be set in the early 1960s, when the first users of electronic computers started to pay increasing attention to the performance of their programs. As in the theory of computation, where the concept of a model of computation led to those of an algorithm and of an algorithmically solvable problem, similarly, in the theory of computational complexity, the concept of a resource used by a computation led to those of an efficient algorithm and of a computationally feasible problem. Since these preliminary stages, many more results have been obtained and, as stated by Hartmanis (1989), ‘the systematic study of computational complexity theory has developed into one of the central and most active research areas of computer science.
It has grown into a rich and exciting mathematical theory whose development is motivated and guided by computer science needs and technological advances.’

The aim of this introductory book is to review in a systematic way the most significant results obtained in this new research area. The main goals of computational complexity theory are to introduce classes of problems which have similar complexity with respect to a specific computation model and complexity measure, and to study the intrinsic properties of such classes.

In this book, we will follow a balanced approach which is partly algorithmic and partly structuralist. From an algorithmic point of view, we will first present some ‘natural’ problems and then illustrate algorithms which solve them. Since the aim is merely to prove that the problem belongs to a specific class, we will not always give the most efficient algorithm, and we will occasionally give preference to an algorithm which is simpler to describe and analyse. From a structural point of view, we will be concerned with intrinsic properties of complexity classes, including relationships between classes, implications between several hypotheses about complexity classes, and identification of structural properties of sets that affect their computational complexity.

The reader is assumed to have some basic knowledge of the theory of computation (as taught in an undergraduate course on Automata Theory, Logic, Formal Language Theory, or Theory of Computation) and of programming languages and techniques. Some mathematical knowledge is also required. The first eight chapters of the book can be taught in a senior undergraduate course. The whole book, together with an exhaustive discussion of the problems, should be suitable for a postgraduate course.

Let us now briefly review the contents of the book and the choices made in selecting the material. The first part (Chapters 1-3) provides the basic tools which will enable us to study topics in complexity theory. Chapter 1 includes a series of definitions and notations related to classic mathematical concepts such as sets, relations and languages (this chapter can be skipped and referred to when needed). Chapter 2 reviews some important results of computability theory. Chapter 3 provides the basic tools of complexity theory: dynamic complexity measures are introduced, the concept of classes of languages is presented, the strict correspondence between such classes and decision problems is established, and techniques used to study the properties of such classes are formulated.

The second part (Chapters 4-8) studies, in a detailed way, the properties of some of the most significant complexity classes. These chapters represent the ‘heart’ of complexity theory: by placing suitable restrictions on the power of the computation model, and thus on the amount of resources allowed for the computation, it becomes possible to define a few fundamental complexity classes and to develop a series of tools enabling us to identify, for most computational problems, the complexity class to which they belong.

The third part (Chapters 9-10) deals with probabilistic algorithms and with the corresponding complexity classes. Probabilistic Turing machines are introduced in Chapter 9 and a few probabilistic algorithms for such machines are analysed. In Chapter 10, a more elaborate computation model known as an interactive proof system is considered and a new complexity class based on such a model is studied.
The last part (Chapters 11 and 12) is dedicated to the complexity of parallel computations. As a result of advances in hardware technology, computers with thousands of processors are now available; it thus becomes important, not only from a theoretical point of view but also from a practical one, to be able to specify which problems are best suited to being run on parallel machines. Chapter 11 describes in detail a few important and widely differing models of parallel computers and shows how their performance can be considered roughly equivalent. Chapter 12 introduces the concept of a problem solvable by a fast parallel algorithm and the complementary one of a problem with no fast parallel algorithm, and illustrates examples of both types of problems.

While selecting material to be included in the book, we followed a few guidelines. First, we have focused our attention on results obtained in the past two decades, mentioning without proof or leaving as problems some well-known results obtained in the 1960s. Second, whenever a proof of a theorem uses a technique described in a previous proof, we have provided an outline, leaving the complete proof as a problem for the reader. Finally, we have systematically avoided stating without proof specialized results from other fields, in order to make the book as self-contained as possible.

Acknowledgements

This book originated from a course on Algorithms and Complexity given at the University of Rome ‘La Sapienza’ by D.P. Bovet and P. Crescenzi since 1986. We would like to thank the students who were exposed to the preliminary versions of the chapters and who contributed their observations to improve the quality of the presentation. We would also like to thank R. Silvestri for pointing out many corrections and for suggesting simpler and clearer proofs of some results.

Chapter 1
Mathematical preliminaries

In this chapter some preliminary definitions, notations and proof techniques which are going to be used in the rest of the book will be introduced.

1.1 Sets, relations and functions

Intuitively, a set A is any collection of elements. If a, b and c are arbitrary elements, then the set A consisting of elements a, b and c is represented as A = {a, b, c}.

A set cannot contain more than one copy or instance of the same element; furthermore, the order in which the elements of the set appear is irrelevant. Thus we define two sets A and B as equal (in symbols, A = B) if every element of A is also an element of B and vice versa. Two sets A and B are not equal (in symbols, A ≠ B) when A = B does not hold true.

The symbols ∈ and ∉ denote, respectively, the fact that an element belongs or does not belong to a set. Referring to the previous example, it is clear that a ∈ A and that d ∉ A.
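These conventions map directly onto executable semantics. As a quick illustration (a minimal Python sketch of our own, not an example from the book, whose algorithms appear later in pseudo-Pascal), built-in sets behave exactly as just defined:

```python
# A small illustration (ours, not the book's) of the set conventions above:
# duplicates and ordering are irrelevant, and equality is mutual containment.

A = {"a", "b", "c"}
B = {"c", "b", "a", "a"}    # the repeated "a" and the ordering do not matter

print(A == B)               # True: every element of A is in B and vice versa
print("a" in A)             # True:  a ∈ A
print("d" not in A)         # True:  d ∉ A
```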
Recommended publications
  • Models of Quantum Computation and Quantum Programming Languages
    Models of quantum computation and quantum programming languages
    Jarosław Adam MISZCZAK, Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Bałtycka 5, 44-100 Gliwice, Poland

    The goal of the presented paper is to provide an introduction to the basic computational models used in quantum information theory. We review various models of quantum Turing machine, quantum circuits and quantum random access machine (QRAM) along with their classical counterparts. We also provide an introduction to quantum programming languages, which are developed using the QRAM model. We review the syntax of several existing quantum programming languages and discuss their features and limitations.

    I. INTRODUCTION

    Computational process must be studied using the fixed model of computational device. This paper introduces the basic models of computation used in quantum information theory. We show how these models are defined by extending classical models. We start by introducing some basic facts about classical and quantum Turing machines. These models help to understand how useful quantum computing can be. It can be also used to discuss the difference between quantum and classical computation. For the sake of completeness we also give a brief introduction to the main res[…]

    […] of existing models of quantum computation is biased towards the models interesting for the development of quantum programming languages. Thus we neglect some models which are not directly related to this area (e.g. quantum automata or topological quantum computation).

    A. Quantum information theory

    Quantum information theory is a new, fascinating field of research which aims to use quantum mechanical description of the system to perform computational tasks.
  • Complexity Theory
    Complexity Theory: Lecture Notes
    October 27, 2016
    Prof. Dr. Roland Meyer, TU Braunschweig, Winter Term 2016/2017

    Preface

    These are the lecture notes accompanying the course Complexity Theory taught at TU Braunschweig. As of October 27, 2016, the notes cover 11 out of 28 lectures. The handwritten notes on the missing lectures can be found at http://concurrency.cs.uni-kl.de. The present document is work in progress and comes with neither a guarantee of completeness wrt. the content of the classes nor a guarantee of correctness. In case you spot a bug, please send a mail to [email protected].

    Peter Chini, Judith Stengel, Sebastian Muskalla, and Prakash Saivasan helped turning initial handwritten notes into LaTeX, provided valuable feedback, and came up with interesting exercises. I am grateful for their help!

    Enjoy the study!
    Roland Meyer

    Introduction

    Complexity Theory has two main goals:

    • Study computational models and programming constructs in order to understand their power and limitations.
    • Study computational problems and their inherent complexity. Complexity usually means time and space requirements (on a particular model). It can also be defined by other measurements, like randomness, number of alternations, or circuit size. Interestingly, an important observation of computational complexity is that the precise model is not important. The classes of problems are robust against changes in the model.

    Background:

    • The theory of computability goes back to Presburger, Gödel, Church, Turing, Post, Kleene (first half of the 20th century). It gives us methods to decide whether a problem can be computed (decided by a machine) at all.
  • La Machine de Turing (The Turing Machine)
    THE TURING MACHINE: COMPILATION AND REALIZATION USING LEGO

    Turing machine

    This article is about the symbol manipulator. For the deciphering machine, see Bombe. For the test of artificial intelligence, see Turing test. For the instrumental rock band, see Turing Machine (band).

    A Turing machine is an abstract machine[1] that manipulates symbols on a strip of tape according to a table of rules; to be more exact, it is a mathematical model of computation that defines such a device.[2] Despite the model’s simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm’s logic.[3] The machine operates on an infinite[4] memory tape divided into cells.[5] The machine positions its head over a cell and “reads” (scans[6]) the symbol there. Then, as per the symbol and its present place in a finite table[7] of […]

    […] expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored.

    [Figure: Classes of automata]

    1 Overview

    A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine (automaton) capable of enumerating some arbitrary subset of valid strings of an alphabet; these strings are part of a recursively enumerable set. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program.
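    The read-write-move cycle just described is concrete enough to execute. Below is a minimal Python sketch of our own; the two-state rule table is invented for illustration and is not taken from the article:

    ```python
    # Minimal Turing-machine sketch of the read/write/move cycle described
    # above. The two-state rule table is a made-up example, not one from the
    # excerpted article. Rules map
    # (state, scanned symbol) -> (symbol to write, head move, next state).

    rules = {
        ("A", "_"): ("1", +1, "B"),
        ("B", "_"): ("0", +1, "A"),
    }

    def run(rules, steps, start="A", blank="_"):
        tape, head, state = {}, 0, start        # sparse tape: cell index -> symbol
        for _ in range(steps):
            scanned = tape.get(head, blank)     # read the symbol under the head
            if (state, scanned) not in rules:   # no applicable rule: halt
                break
            write, move, state = rules[(state, scanned)]
            tape[head] = write                  # write, then move the head
            head += move
        cells = range(min(tape, default=0), max(tape, default=0) + 1)
        return "".join(tape.get(i, blank) for i in cells)

    print(run(rules, 8))  # prints 10101010
    ```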
  • Lecture 1 Review of Basic Computational Complexity
    Lecture 1: Review of Basic Computational Complexity
    March 30, 2004
    Lecturer: Paul Beame
    Notes: Daniel Lowd

    1.1 Preliminaries

    1.1.1 Texts

    There is no one textbook that covers everything in this course. Some of the discoveries are simply too recent. For the first portion of the course, however, two supplementary texts may be useful:

    • Michael Sipser, Introduction to the Theory of Computation.
    • Christos Papadimitriou, Computational Complexity.

    1.1.2 Coursework

    Required work for this course consists of lecture notes and group assignments. Each student is assigned to take thorough lecture notes no more than twice during the quarter. The student must then type up the notes in expanded form in LaTeX. Lecture notes may also contain additional references or details omitted in class. A LaTeX stylesheet for this will soon be made available on the course webpage. Notes should be typed, submitted, revised, and posted to the course webpage within one week. Group assignments will be homework sets that may be done cooperatively. In fact, cooperation is encouraged. Credit to collaborators should be given. There will be no tests.

    1.1.3 Overview

    The first part of the course will predominantly discuss complexity classes above NP and relate randomness, circuits, and counting to P, NP, and the polynomial-time hierarchy. One of the motivating goals is to consider how one might separate P from classes above P. Results covered will range from the 1970s to recent work, including tradeoffs between time and space complexity. In the middle section of the course we will cover the powerful role that interaction has played in our understanding of computational complexity and, in particular, the powerful results on probabilistically checkable proofs (PCP).
  • Turing and the Development of Computational Complexity
    Turing and the Development of Computational Complexity
    Steven Homer, Department of Computer Science, Boston University, Boston, MA 02215
    Alan L. Selman, Department of Computer Science and Engineering, State University of New York at Buffalo, 201 Bell Hall, Buffalo, NY 14260
    August 24, 2011

    Abstract

    Turing’s beautiful capture of the concept of computability by the “Turing machine” linked computability to a device with explicit steps of operations and use of resources. This invention led in a most natural way to build the foundations for computational complexity.

    1 Introduction

    Computational complexity provides mechanisms for classifying combinatorial problems and measuring the computational resources necessary to solve them. The discipline provides explanations of why certain problems have no practical solutions and provides a way of anticipating difficulties involved in solving problems of certain types. The classification is quantitative and is intended to investigate what resources are necessary, lower bounds, and what resources are sufficient, upper bounds, to solve various problems. This classification should not depend on a particular computational model but rather should measure the intrinsic difficulty of a problem. Precisely for this reason, as we will explain, the basic model of computation for our study is the multitape Turing machine.

    Computational complexity theory today addresses issues of contemporary concern, for example, parallel computation, circuit design, computations that depend on random number generators, and development of efficient algorithms. Above all, computational complexity is interested in distinguishing problems that are efficiently computable. Algorithms whose running times are n² in the size of their inputs can be implemented to execute efficiently even for fairly large values of n, but algorithms that require an exponential running time can be executed only for small values of n.
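    The quadratic-versus-exponential contrast is easy to make concrete. Here is a small Python sketch of our own (the rate of 10^9 steps per second is a notional figure assumed for illustration, not taken from the paper):

    ```python
    # Compare quadratic and exponential step counts on a notional machine
    # executing 10**9 steps per second (an assumed figure, for illustration).

    STEPS_PER_SECOND = 10**9

    for n in (10, 50, 100):
        quadratic = n**2        # at most 10,000 steps here: microseconds
        exponential = 2**n      # roughly 1.3e30 steps already at n = 100
        print(f"n = {n:3}: n^2 -> {quadratic / STEPS_PER_SECOND:.2e} s, "
              f"2^n -> {exponential / STEPS_PER_SECOND:.2e} s")
    ```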
  • Time-Space Tradeoffs for Satisfiability
    Time-Space Tradeoffs for Satisfiability
    Lance Fortnow*, University of Chicago, Department of Computer Science, 1100 E. 58th St., Chicago, IL 60637

    Abstract

    We give the first nontrivial model-independent time-space tradeoffs for satisfiability. Namely, we show that SAT cannot be solved simultaneously in n^(1+o(1)) time and n^(1−ε) space for any ε > 0 on general random-access nondeterministic Turing machines. In particular, SAT cannot be solved deterministically by a Turing machine using quasilinear time and √n space. We also give lower bounds for log-space uniform NC¹ circuits and branching programs.

    Our proof uses two basic ideas. First we show that if SAT can be solved nondeterministically with a small amount of time then we can collapse a nonconstant number of levels of the polynomial-time hierarchy. We combine this work with a result of Nepomnjaščiĭ that shows that a nondeterministic computation of superlinear time and sublinear space can be simulated in alternating linear time. A simple diagonalization yields our main result. We discuss how these bounds lead to a new approach to separating the complexity classes NL and NP. We give some possibilities and limitations of this approach.

    1 Introduction

    Separating complexity classes remains the most important and difficult of problems in theoretical computer science. Circuit complexity and other techniques on finite functions have seen some exciting early successes (see [BS90]) but have yet to achieve their promise of separating complexity classes above logarithmic space. Other techniques based on logic and geometry have also given us separations only on very restricted models. We should turn back to a traditional separation technique: diagonalization.
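    Stated in the simultaneous time-space class notation common in this literature (our formalization: the NTISP/DTISP notation below is an assumption of ours, not defined in the excerpt), the abstract's two claims read:

    ```latex
    % Our formalization of the abstract's claims (requires amsmath).
    % NTISP(t, s) / DTISP(t, s): problems decidable nondeterministically /
    % deterministically in time t and space s simultaneously; this notation
    % is assumed here, not defined in the excerpt.
    \[
      \mathrm{SAT} \notin \mathrm{NTISP}\bigl(n^{1+o(1)},\; n^{1-\varepsilon}\bigr)
      \quad \text{for every } \varepsilon > 0,
    \]
    \[
      \mathrm{SAT} \notin \mathrm{DTISP}\bigl(n \cdot \mathrm{polylog}(n),\; \sqrt{n}\bigr).
    \]
    ```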