An Overview of Computational Complexity


An Overview of Computational Complexity

TURING AWARD LECTURE
AN OVERVIEW OF COMPUTATIONAL COMPLEXITY
STEPHEN A. COOK, University of Toronto

ABSTRACT: An historical overview of computational complexity is presented. Emphasis is on the fundamental issues of defining the intrinsic computational complexity of a problem and proving upper and lower bounds on the complexity of problems. Probabilistic and parallel computation are discussed.

Author's Present Address: Stephen A. Cook, Department of Computer Science, University of Toronto, Toronto, Canada M5S 1A7.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/0600-0401 75¢.

This is the second Turing Award lecture on Computational Complexity; the first was given by Michael Rabin in 1976.¹ In reading Rabin's excellent article [62] now, one of the things that strikes me is how much activity there has been in the field since. In this brief overview I want to mention what to me are the most important and interesting results since the subject began in about 1960. In such a large field the choice of topics is inevitably somewhat personal; however, I hope to include papers which, by any standards, are fundamental.

1. EARLY PAPERS
The prehistory of the subject goes back, appropriately, to Alan Turing. In his 1937 paper, On computable numbers with an application to the Entscheidungsproblem [85], Turing introduced his famous Turing machine, which provided the most convincing formalization (up to that time) of the notion of an effectively (or algorithmically) computable function. Once this notion was pinned down precisely, impossibility proofs for computers were possible. In the same paper Turing proved that no algorithm (i.e., Turing machine) could, upon being given an arbitrary formula of the predicate calculus, decide, in a finite number of steps, whether that formula was satisfiable.

After the theory explaining which problems can and cannot be solved by computer was well developed, it was natural to ask about the relative computational difficulty of computable functions. This is the subject matter of computational complexity. Rabin [59, 60] was one of the first persons (1960) to address this general question explicitly: what does it mean to say that f is more difficult to compute than g? Rabin suggested an axiomatic framework that provided the basis for the abstract complexity theory developed by Blum [6] and others.

A second early (1965) influential paper was On the computational complexity of algorithms by J. Hartmanis and R. E. Stearns [37].² This paper was widely read and gave the field its title. The important notion of complexity measure defined by the computation time on multitape Turing machines was introduced, and hierarchy theorems were proved. The paper also posed an intriguing question that is still open today: Is any irrational algebraic number (such as √2) computable in real time, that is, is there a Turing machine that prints out the decimal expansion of the number at the rate of one digit per 100 steps forever?

¹ Michael Rabin and Dana Scott shared the Turing Award in 1976.
² See Hartmanis [36] for some interesting reminiscences.
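(Illustrative sketch, not part of the original lecture.) The Hartmanis-Stearns measure charges one unit per machine step, and a minimal single-tape simulator that counts steps makes the idea concrete. The example machine below, which appends the parity of a binary input, is hypothetical and chosen only for brevity; the paper's measure is defined for multitape machines.

    # Minimal single-tape Turing machine simulator that counts steps
    # (illustration only; Hartmanis-Stearns use multitape machines).
    from collections import defaultdict

    def run_tm(delta, start, accept, tape_input, blank="_", max_steps=10_000):
        """delta maps (state, symbol) -> (new_state, new_symbol, move in {-1, +1})."""
        tape = defaultdict(lambda: blank, enumerate(tape_input))
        state, head, steps = start, 0, 0
        while state != accept and steps < max_steps:
            state, tape[head], move = delta[(state, tape[head])]
            head += move
            steps += 1
        return steps, "".join(tape[i] for i in sorted(tape))

    # Hypothetical example machine: scan right over a binary string, then write
    # its parity after the end of the input.  "qe"/"qo" track even/odd parity.
    delta = {
        ("qe", "0"): ("qe", "0", +1), ("qe", "1"): ("qo", "1", +1),
        ("qo", "0"): ("qo", "0", +1), ("qo", "1"): ("qe", "1", +1),
        ("qe", "_"): ("halt", "0", +1), ("qo", "_"): ("halt", "1", +1),
    }
    print(run_tm(delta, "qe", "halt", "10110"))   # halts after n + 1 steps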
A third founding paper (1965) was The intrinsic computational difficulty of functions by Alan Cobham [15]. Cobham emphasized the word "intrinsic," that is, he was interested in a machine-independent theory. He asked whether multiplication is harder than addition, and believed that the question could not be answered until the theory was properly developed. Cobham also defined and characterized the important class of functions he called ℒ: those functions on the natural numbers computable in time bounded by a polynomial in the decimal length of the input.

Three other papers that influenced the above authors as well as other complexity workers (including myself) are Yamada [91], Bennett [4], and Ritchie [66]. It is interesting to note that Rabin, Stearns, Bennett, and Ritchie were all students at Princeton at roughly the same time.

2. EARLY ISSUES AND CONCEPTS
Several of the early authors were concerned with the question: What is the right complexity measure? Most mentioned computation time or space as obvious choices, but were not convinced that these were the only or the right ones. For example, Cobham [15] suggested "... some measure related to the physical notion of work [may] lead to the most satisfactory analysis." Rabin [60] introduced axioms which a complexity measure should satisfy. With the perspective of 20 years experience, I now think it is clear that time and space, especially time, are certainly among the most important complexity measures. It seems that the first figure of merit given to evaluate the efficiency of an algorithm is its running time. However, more recently it is becoming clear that parallel time and hardware size are important complexity measures too (see Section 6).

Another important complexity measure that goes back in some form at least to Shannon [74] (1949) is Boolean circuit (or combinational) complexity. Here it is convenient to assume that the function f in question takes finite bit strings into finite bit strings, and the complexity C(n) of f is the size of the smallest Boolean circuit that computes f for all inputs of length n. This very natural measure is closely related to computation time (see [57a], [57b], [68b]), and has a well developed theory in its own right (see Savage [68a]).

Another question raised by Cobham [15] is what constitutes a "step" in a computation. This amounts to asking what is the right computer model for measuring the computation time of an algorithm. Multitape Turing machines are commonly used in the literature, but they have artificial restrictions from the point of view of efficient implementation of algorithms. For example, there is no compelling reason why the storage media should be linear tapes. Why not planar arrays or trees? Why not allow a random access memory?

In fact, quite a few computer models have been proposed since 1960. Since real computers have random access memories, it seems natural to allow these in the model. In one commonly used random access model, each operation involving an integer has unit cost, but integers are not allowed to become unreasonably large (for example, their magnitude might be bounded by some fixed polynomial in the size of the input). Probably the most mathematically satisfying model is Schönhage's storage modification machine [69], which can be viewed either as a Turing machine that builds its own storage structure or as a unit cost RAM that can only copy, add or subtract one, or store or retrieve in one step. Schönhage's machine is a slight generalization of the Kolmogorov-Uspenski machine proposed much earlier [46] (1958), and seems to me to represent the most general machine that could possibly be construed as doing a bounded amount of work in one step. The trouble is that it probably is a little too powerful. (See Section 3 under "large number multiplication.")

Returning to Cobham's question "what is a step," I think what has become clear in the last 20 years is that there is no single clear answer. Fortunately, the competing computer models are not wildly different in computation time. In general, each can simulate any other by at most squaring the computation time (some of the first arguments to this effect are in [37]). Among the leading random access models, there is only a factor of log computation time in question.

This leads to the final important concept developed by 1965: the identification of the class of problems solvable in time bounded by a polynomial in the length of the input. The distinction between polynomial time and exponential time algorithms was made as early as 1953 by von Neumann [90]. However, the class was not defined formally and studied until Cobham [15] introduced the class ℒ of functions in 1964 (see Section 1). Cobham pointed out that the class was well defined, independent of which computer model was chosen, and gave it a characterization in the spirit of recursive function theory. The idea that polynomial time computability roughly corresponds to tractability was first expressed in print by Edmonds [27], who called polynomial time algorithms "good algorithms." The now standard notation P for the class of polynomial time recognizable sets of strings was introduced later by Karp [42].

The identification of P with the tractable (or feasible) problems has been generally accepted in the field since the early 1970's. It is not immediately obvious why this should be true, since an algorithm whose running time is the polynomial n^100 is surely not feasible, and conversely, one whose running time is the exponential 2^(0.0001 n) is feasible in practice. It seems to be an empirical fact, however, that naturally arising problems do not have optimal algorithms with such running times. The most notable practical algorithm that has an exponential worst case running time is the simplex algorithm for linear programming. Smale [75, 76] attempts to explain this by showing that, in some sense, the average running time is fast, but it is also important to note that Khachian [43] showed that linear programming is in P using another algorithm. Thus, our general thesis, that P equals the feasible problems, is not violated.
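(Illustrative sketch, not part of the original lecture.) The point about n^100 versus 2^(0.0001 n) is easy to check numerically; the short Python sketch below compares the two bounds through their base-2 logarithms to avoid enormous intermediate values.

    import math

    def log2_poly(n):          # log2 of n^100
        return 100 * math.log2(n)

    def log2_exp(n):           # log2 of 2^(0.0001 * n)
        return 0.0001 * n

    for n in (10**2, 10**4, 10**6):
        print(f"n = {n:>9,}   log2(n^100) = {log2_poly(n):7.1f}   "
              f"log2(2^(0.0001 n)) = {log2_exp(n):7.2f}")
    # Even at n = 10**6 the "exponential" bound is about 2^100 steps while the
    # polynomial bound is about 2^1993; the polynomial only becomes the smaller
    # of the two near n of roughly 2.5 * 10**7.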
Recommended publications
  • Infinitary Logic and Inductive Definability Over Finite Structures
    University of Pennsylvania ScholarlyCommons, Technical Reports (CIS), Department of Computer & Information Science, November 1991. Anuj Dawar, Steven Lindell, and Scott Weinstein, "Infinitary Logic and Inductive Definability Over Finite Structures", University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-91-97.

    Abstract: The extensions of first-order logic with a least fixed point operator (FO + LFP) and with a partial fixed point operator (FO + PFP) are known to capture the complexity classes P and PSPACE respectively in the presence of an ordering relation over finite structures. Recently, Abiteboul and Vianu [AV91b] investigated the relation of these two logics in the absence of an ordering, using a machine model of generic computation. In particular, they showed that the two languages have equivalent expressive power if and only if P = PSPACE. These languages can also be seen as fragments of an infinitary logic where each formula has a bounded number of variables, L^ω_∞ω (see, for instance, [KV90]). We present a treatment of the results in [AV91b] from this point of view. In particular, we show that we can write a formula of FO + LFP that defines an ordering of the L^ω_∞ω types uniformly over all finite structures; one consequence is a generalization of the equivalence of FO + LFP and P from ordered structures to classes of structures where every element is definable.
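    A small concrete instance of the least-fixed-point definitions the abstract refers to (an illustration added here, not from the report): transitive closure of an edge relation is the least fixed point of a simple inductive rule, and naive iteration computes it in polynomial time.

        # Least fixed point of  TC(x,y) <- E(x,y) OR exists z. TC(x,z) AND E(z,y),
        # computed by naive iteration; the standard example of an FO + LFP
        # definable (hence polynomial-time) query.  Illustration only.
        def transitive_closure(edges):
            tc = set(edges)
            changed = True
            while changed:                      # iterate until a fixed point
                changed = False
                for (x, z) in list(tc):
                    for (z2, y) in edges:
                        if z == z2 and (x, y) not in tc:
                            tc.add((x, y))
                            changed = True
            return tc

        E = {(1, 2), (2, 3), (3, 4)}
        print(sorted(transitive_closure(E)))
        # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]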
  • Complexity Theory Lecture 9: co-NP, co-NP-Complete
    Complexity Theory, Lecture 9. Anuj Dawar, University of Cambridge Computer Laboratory, Easter Term 2010. http://www.cl.cam.ac.uk/teaching/0910/Complexity/

    co-NP. As co-NP is the collection of complements of languages in NP, and P is closed under complementation, co-NP can also be characterised as the collection of languages of the form L = { x | ∀y (|y| < p(|x|) → R(x, y)) }. NP is the collection of languages with succinct certificates of membership; co-NP is the collection of languages with succinct certificates of disqualification.

    NP ∩ co-NP. Any of the following situations is consistent with our present state of knowledge: P = NP = co-NP; P = NP ∩ co-NP with NP ≠ co-NP; P ≠ NP ∩ co-NP with NP = co-NP; P ≠ NP ∩ co-NP with NP ≠ co-NP.

    co-NP-complete. VAL, the collection of Boolean expressions that are valid, is co-NP-complete. Any language L that is the complement of an NP-complete language is co-NP-complete: any reduction of a language L1 to L2 is also a reduction of the complement of L1 to the complement of L2. There is an easy reduction from the complement of SAT to VAL, namely the map that takes an expression to its negation. VAL ∈ P ⇒ P = NP = co-NP; VAL ∈ NP ⇒ NP = co-NP.

    Prime Numbers / Primality. Consider the decision problem PRIME. Another way of putting this is that COMPOSITE is in NP.
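    The reduction from the complement of SAT to VAL mentioned above is just negation: a formula is unsatisfiable exactly when its negation is valid. A brute-force sketch for tiny formulas (an illustration added here, not from the slides; the example formula is made up):

        from itertools import product

        def satisfiable(phi, variables):
            return any(phi(dict(zip(variables, bits)))
                       for bits in product([False, True], repeat=len(variables)))

        def valid(phi, variables):
            return all(phi(dict(zip(variables, bits)))
                       for bits in product([False, True], repeat=len(variables)))

        # Hypothetical example formula: (x or y) and (not x or not y), i.e. x XOR y
        phi = lambda a: (a["x"] or a["y"]) and (not a["x"] or not a["y"])
        neg_phi = lambda a: not phi(a)

        assert satisfiable(phi, ["x", "y"])       # phi is satisfiable ...
        assert not valid(neg_phi, ["x", "y"])     # ... so its negation is not valid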
  • Computational Complexity and Intractability: An Introduction to the Theory of NP (Chapter 9)
    Objectives: classify problems as tractable or intractable; define decision problems; define the class P; define nondeterministic algorithms; define the class NP; define polynomial transformations; define the class of NP-Complete problems.

    Input Size and Time Complexity. Time complexity of algorithms: polynomial time (efficient) vs. exponential time (inefficient).

        f(n)    n = 10         n = 30         n = 50
        n       0.00001 sec    0.00003 sec    0.00005 sec
        n^5     0.1 sec        24.3 sec       5.2 mins
        2^n     0.001 sec      17.9 mins      35.7 yrs

    "Hard" and "Easy" Problems. "Easy" problems can be solved by polynomial time algorithms: searching, sorting, Dijkstra's algorithm, matrix multiplication, all-pairs shortest paths. "Hard" problems cannot be solved by polynomial time algorithms: 0/1 knapsack, traveling salesman. Sometimes the dividing line between "easy" and "hard" problems is a fine one. For example: find the shortest path in a graph from X to Y (easy); find the longest path (with no cycles) in a graph from X to Y (hard).

    Motivation: is it possible to efficiently solve "hard" problems? Efficiently solve means polynomial time solutions. Some problems have been proved to have no efficient algorithms, for example, printing all permutations of n elements. However, for many problems we cannot prove that no efficient algorithm exists, and at the same time we cannot find one either.

    Traveling Salesperson Problem. No algorithm has ever been developed with a worst-case time complexity better than exponential.
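    The running times in the table can be reproduced directly; the sketch below assumes, purely for illustration, about one microsecond per basic operation (the rate the table implies).

        # Reproduce the running-time table, assuming 1e-6 seconds per basic
        # operation (an assumption used only for illustration).
        def fmt(seconds):
            if seconds < 60:   return f"{seconds:.5g} sec"
            if seconds < 3600: return f"{seconds/60:.3g} mins"
            return f"{seconds/(3600*24*365):.3g} yrs"

        step = 1e-6
        for name, f in [("n", lambda n: n), ("n^5", lambda n: n**5), ("2^n", lambda n: 2**n)]:
            row = "   ".join(fmt(f(n) * step) for n in (10, 30, 50))
            print(f"{name:4} {row}")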
  • Average-Case Analysis of Algorithms and Data Structures / L'Analyse en Moyenne des Algorithmes et des Structures de Données
    Average-Case Analysis of Algorithms and Data Structures (L'analyse en moyenne des algorithmes et des structures de données). Jeffrey Scott Vitter and Philippe Flajolet.

    Abstract: This report is a contributed chapter to the Handbook of Theoretical Computer Science (North-Holland, 1990). Its aim is to describe the main mathematical methods and applications in the average-case analysis of algorithms and data structures. It comprises two parts: First, we present basic combinatorial enumerations based on symbolic methods and asymptotic methods with emphasis on complex analysis techniques (such as singularity analysis, saddle point, Mellin transforms). Next, we show how to apply these general methods to the analysis of sorting, searching, tree data structures, hashing, and dynamic algorithms. The emphasis is on algorithms for which exact "analytic models" can be derived.

    Résumé: This report is a chapter appearing in the Handbook of Theoretical Computer Science (North-Holland, 1990). Its aim is to describe the principal methods and applications of the average-case complexity analysis of algorithms. It comprises two parts. First, we present combinatorial enumeration methods based on the use of symbolic methods, together with asymptotic techniques founded on complex analysis (singularity analysis, saddle-point method, Mellin transforms). Next, we describe the application of these general methods to the analysis of sorting, searching, tree manipulation, hashing, and dynamic algorithms.
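    As a concrete instance of the kind of result average-case analysis yields (an illustration added here, not from the report), the following sketch checks by simulation the classical fact that quicksort with a uniformly random pivot makes 2(n+1)H_n − 4n comparisons on average, roughly 2 n ln n.

        # Monte Carlo check of a classical average-case result (illustration only).
        import random

        def qsort_comparisons(a):
            if len(a) <= 1:
                return 0
            pivot = random.choice(a)
            less = [x for x in a if x < pivot]
            more = [x for x in a if x > pivot]
            # the pivot is compared once with each of the other len(a) - 1 elements
            return (len(a) - 1) + qsort_comparisons(less) + qsort_comparisons(more)

        n, trials = 500, 400
        avg = sum(qsort_comparisons(random.sample(range(10**6), n))
                  for _ in range(trials)) / trials
        harmonic = sum(1 / k for k in range(1, n + 1))
        print(f"observed {avg:.0f}  vs  2(n+1)H_n - 4n = {2*(n+1)*harmonic - 4*n:.0f}")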
  • Slides 6, HT 2019 Space Complexity
    Computational Complexity; Slides 6, HT 2019: Space complexity. Prof. Paul W. Goldberg (Dept. of Computer Science, University of Oxford), HT 2019.

    Road map. I mentioned classes like LOGSPACE (usually called L), SPACE(f(n)), etc. How do they relate to each other, and to time complexity classes? Next: various inclusions can be proved, some more easily than others; let's begin with "low-hanging fruit". E.g., I have noted: TIME(f(n)) is a subset of SPACE(f(n)) (easy!). We will see, e.g., that L is a proper subset of PSPACE, although it is unknown how they relate to various intermediate classes, e.g. P, NP. Various interesting problems are complete for PSPACE, EXPTIME, and some of the others.

    Space Complexity. So far, we have measured the complexity of problems in terms of the time required to solve them. Alternatively, we can measure the space/memory required to compute a solution. Important difference: space can be re-used. Convention: in this section we will be using Turing machines with a designated read-only input tape, so "logarithmic space" becomes meaningful.

    Definition.
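    To see why a read-only input tape makes logarithmic space meaningful, here is a small sketch (an illustration added here, not from the slides): it scans the input once and keeps only two counters, so its working storage is O(log n) bits even though the input has n bits.

        # Sketch of a "logarithmic space" style computation: decide whether a
        # binary string has equally many 0s and 1s, reading the input left to
        # right (read-only) and keeping only two counters as work space.
        def balanced(bits: str) -> bool:
            zeros = ones = 0          # the entire working storage
            for b in bits:            # one pass over the read-only input
                if b == "0":
                    zeros += 1
                else:
                    ones += 1
            return zeros == ones

        print(balanced("001011"))    # True
        print(balanced("0011101"))   # False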
  • Lecture 12 – the Permanent and the Determinant
    Lecture 12: The permanent and the determinant. Uriel Feige, Department of Computer Science and Applied Mathematics, The Weizmann Institute, Rehovot 76100, Israel. uriel.feige@weizmann.ac.il. June 23, 2014.

    1 Introduction. Given an order n matrix A, its permanent is per(A) = Σ_σ ∏_{i=1}^{n} a_{i,σ(i)}, where σ ranges over all permutations on n elements. Its determinant is det(A) = Σ_σ (−1)^σ ∏_{i=1}^{n} a_{i,σ(i)}, where (−1)^σ is +1 for even permutations and −1 for odd permutations. A permutation is even if it can be obtained from the identity permutation using an even number of transpositions (where a transposition is a swap of two elements), and odd otherwise. For those more familiar with the inductive definition of the determinant, obtained by developing the determinant by the first row of the matrix, observe that the inductive definition, if spelled out, leads exactly to the formula above. The same inductive definition applies to the permanent, but without the alternating sign rule.

    The determinant can be computed in polynomial time by Gaussian elimination, and in matrix multiplication time n^ω by fast matrix multiplication. On the other hand, there is no polynomial time algorithm known for computing the permanent. In fact, Valiant showed that the permanent is complete for the complexity class #P, which makes computing it as difficult as computing the number of solutions of NP-complete problems (such as SAT; Valiant's reduction was from Hamiltonicity). For 0/1 matrices, the matrix A can be thought of as the adjacency matrix of a bipartite graph (we refer to it as a bipartite adjacency matrix; technically, A is an off-diagonal block of the usual adjacency matrix), and then the permanent counts the number of perfect matchings.
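    A brute-force rendering of the two formulas (an illustration added here, not from the lecture notes) makes the only difference, the sign (−1)^σ, explicit; for a 0/1 bipartite adjacency matrix the permanent indeed counts perfect matchings.

        # Brute-force permanent and determinant straight from the formulas above
        # (exponential time; fine only for tiny matrices, illustration only).
        from itertools import permutations

        def sign(sigma):
            # (-1)^sigma via counting inversions
            inv = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
                      if sigma[i] > sigma[j])
            return -1 if inv % 2 else 1

        def prod_row(A, sigma):
            p = 1
            for i, j in enumerate(sigma):
                p *= A[i][j]
            return p

        def permanent(A):
            return sum(prod_row(A, s) for s in permutations(range(len(A))))

        def determinant(A):
            return sum(sign(s) * prod_row(A, s) for s in permutations(range(len(A))))

        # 0/1 bipartite adjacency matrix: per(A) counts its perfect matchings.
        A = [[1, 1, 0],
             [0, 1, 1],
             [1, 0, 1]]
        print(permanent(A), determinant(A))   # 2 perfect matchings; det is also 2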
  • Deterministic Execution of Multithreaded Applications
    DETERMINISTIC EXECUTION OF MULTITHREADED APPLICATIONS FOR RELIABILITY OF MULTICORE SYSTEMS

    Dissertation submitted for the degree of doctor at Technische Universiteit Delft, under the authority of the Rector Magnificus Prof. ir. K. C. A. M. Luyben, chair of the Board for Doctorates, to be defended in public on Friday 19 June 2015 at 10:00 by Hamid MUSHTAQ, Master of Science in Embedded Systems, born in Karachi, Pakistan. This dissertation has been approved by the promotor, Prof. dr. K. L. M. Bertels, and the copromotor, Dr. Z. Al-Ars.

    Composition of the doctoral committee: Rector Magnificus, chairman; Prof. dr. K. L. M. Bertels, Technische Universiteit Delft, promotor; Dr. Z. Al-Ars, Technische Universiteit Delft, copromotor. Independent members: Prof. dr. H. Sips, Technische Universiteit Delft; Prof. dr. N. H. G. Baken, Technische Universiteit Delft; Prof. dr. T. Basten, Technische Universiteit Eindhoven, Netherlands; Prof. dr. L. J. M. Rothkrantz, Netherlands Defence Academy; Prof. dr. D. N. Pnevmatikatos, Technical University of Crete, Greece.

    Keywords: Multicore, Fault Tolerance, Reliability, Deterministic Execution, WCET. The work in this thesis was supported by Artemis through the SMECY project (grant 100230). Cover image: an artist's impression of a view of Saturn from its moon Titan; the image was taken from http://www.istockphoto.com and used with permission. ISBN 978-94-6186-487-1. Copyright © 2015 by H. Mushtaq. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means without the prior written permission of the copyright owner.
  • Statistical Problems Involving Permutations with Restricted Positions
    STATISTICAL PROBLEMS INVOLVING PERMUTATIONS WITH RESTRICTED POSITIONS. Persi Diaconis, Ronald Graham and Susan P. Holmes. Stanford University; University of California and AT&T; Stanford University and INRA-Biométrie.

    The rich world of permutation tests can be supplemented by a variety of applications where only some permutations are permitted. We consider two examples: testing independence with truncated data and testing extra-sensory perception with feedback. We review relevant literature on permanents, rook polynomials and complexity. The statistical applications call for new limit theorems. We prove a few of these and offer an approach to the rest via Stein's method. Tools from the proof of van der Waerden's permanent conjecture are applied to prove a natural monotonicity conjecture. AMS subject classifications: 62G09, 62G10. Keywords and phrases: Permanents, rook polynomials, complexity, statistical test, Stein's method.

    1 Introduction. Definitive work on permutation testing by Willem van Zwet, his students and collaborators, has given us a rich collection of tools for probability and statistics. We have come upon a series of variations where randomization naturally takes place over a subset of all permutations. The present paper gives two examples of sets of permutations defined by restricting positions. Throughout, a permutation π is represented in two-line notation

        1     2     3    ...  n
        π(1)  π(2)  π(3) ...  π(n)

    with π(i) referred to as the label at position i. The restrictions are specified by a zero-one matrix A_ij of dimension n, with A_ij equal to one if and only if label j is permitted in position i. Let S_A be the set of all permitted permutations.
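    The number of permitted permutations |S_A| is exactly the permanent of the restriction matrix A; the small sketch below (an illustration added here, with a made-up restriction matrix) enumerates S_A directly.

        # Count and list the permutations permitted by a 0/1 restriction matrix A
        # (A[i][j] == 1 iff label j may occupy position i); |S_A| equals per(A).
        from itertools import permutations

        def permitted_permutations(A):
            n = len(A)
            return [sigma for sigma in permutations(range(n))
                    if all(A[i][sigma[i]] == 1 for i in range(n))]

        # Hypothetical restriction matrix: label i is forbidden in position i
        # (the "derangement" restriction).
        A = [[0, 1, 1],
             [1, 0, 1],
             [1, 1, 0]]
        S_A = permitted_permutations(A)
        print(len(S_A), S_A)   # 2 derangements of 3 elements: (1, 2, 0) and (2, 0, 1)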
  • Week 1: An Overview of Circuit Complexity
    Topics in Circuit Complexity (CS354, Fall '11). Week 1: An Overview of Circuit Complexity. Lecture Notes for 9/27 and 9/29. Ryan Williams.

    1 Welcome. The area of circuit complexity has a long history, starting in the 1940's. It is full of open problems and frontiers that seem insurmountable, yet the literature on circuit complexity is fairly large. There is much that we do know, although it is scattered across several textbooks and academic papers. I think now is a good time to look again at circuit complexity with fresh eyes, and try to see what can be done.

    2 Preliminaries. An n-bit Boolean function has domain {0,1}^n and co-domain {0,1}. At a high level, the basic question asked in circuit complexity is: given a collection of "simple functions" and a target Boolean function f, how efficiently can f be computed (on all inputs) using the simple functions? Of course, efficiency can be measured in many ways. The most natural measure is that of the "size" of computation: how many copies of these simple functions are necessary to compute f? Let B be a set of Boolean functions, which we call a basis set. The fan-in of a function g ∈ B is the number of inputs that g takes. (Typical choices are fan-in 2, or unbounded fan-in, meaning that g can take any number of inputs.) We define a circuit C with n inputs and size s over a basis B, as follows. C consists of a directed acyclic graph (DAG) of s + n + 2 nodes, with n sources and one sink (the s-th node in some fixed topological order on the nodes).
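    A minimal sketch of the circuit-as-DAG definition (added for illustration; the fan-in-2 basis {AND, OR, NOT} is one common choice, not necessarily the basis the notes fix later): each non-input node applies a basis function to the values of its predecessors, and the number of such gates is the circuit's size.

        # Evaluate a Boolean circuit given as a DAG listed in topological order.
        # Each gate is (op, predecessor indices); inputs occupy the first n slots.
        BASIS = {"AND": lambda a, b: a and b,   # a fan-in-2 basis {AND, OR, NOT}
                 "OR":  lambda a, b: a or b,
                 "NOT": lambda a: not a}

        def evaluate(n_inputs, gates, x):
            assert len(x) == n_inputs
            values = list(x)                    # nodes 0 .. n_inputs-1 are inputs
            for op, preds in gates:
                values.append(BASIS[op](*(values[j] for j in preds)))
            return values[-1]                   # the last gate is the sink

        # Example circuit of size 4 computing XOR: (x0 OR x1) AND NOT(x0 AND x1)
        xor_gates = [("OR", [0, 1]), ("AND", [0, 1]), ("NOT", [3]), ("AND", [2, 4])]
        for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            print(x, int(evaluate(2, xor_gates, x)))   # prints the XOR truth table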
  • Chapter 24: coNP, Self-Reductions
    Chapter 24: coNP, Self-Reductions. CS 473: Fundamental Algorithms, Spring 2013. April 24, 2013. 24.1 Complementation and Self-Reduction; 24.2 Complementation; 24.2.1 Recap.

    The class P. A language L (equivalently, decision problem) is in the class P if there is a polynomial time algorithm A for deciding L; that is, given a string x, A correctly decides if x ∈ L and the running time of A on x is polynomial in |x|, the length of x.

    The class NP. Two equivalent definitions: (A) Language L is in NP if there is a non-deterministic polynomial time algorithm A (Turing Machine) that decides L: for x ∈ L, A has some non-deterministic choice of moves that will make A accept x; for x ∉ L, no choice of moves will make A accept x. (B) L has an efficient certifier C(·,·): C is a polynomial time deterministic algorithm; for x ∈ L there is a string y (a proof) of length polynomial in |x| such that C(x, y) accepts; for x ∉ L, no string y will make C(x, y) accept.

    Complementation. Definition 24.2.1. Given a decision problem X, its complement is the collection of all instances s such that s ∉ L(X). Equivalently, in terms of languages: Definition 24.2.2. Given a language L over alphabet Σ, its complement is the language Σ* \ L.

    Examples. PRIME = {n | n is an integer and n is prime}; its complement is {n | n is an integer and n is not a prime}, which is COMPOSITE.
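    A concrete certifier in the sense of definition (B), added here as an illustration: COMPOSITE has succinct certificates (a nontrivial factor), which is exactly why PRIME, its complement, lies in co-NP.

        # Certifier C(x, y) for COMPOSITE: the certificate y is a nontrivial
        # factor of x.  It runs in time polynomial in the lengths of x and y.
        def certify_composite(x: int, y: int) -> bool:
            return 1 < y < x and x % y == 0

        print(certify_composite(91, 7))    # True:  91 = 7 * 13, so 91 is composite
        print(certify_composite(97, 5))    # False: 5 certifies nothing for 97
        # 97 is prime, so no y makes the certifier accept; that is what places
        # 97 in the complement language and PRIME in co-NP.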
  • Design and Analysis of Algorithms
    Course number and name: CS 07340: Design and Analysis of Algorithms. Credits and contact hours: 3 credits / 3 contact hours. Faculty Coordinator: Andrea Lobo. Text book, title, author, and year: The Design and Analysis of Algorithms, Anany Levitin, 2012.

    Specific course information. Catalog description: In this course, students will learn to design and analyze efficient algorithms for sorting, searching, graphs, sets, matrices, and other applications. Students will also learn to recognize and prove NP-Completeness. Prerequisites: CS07210 Foundations of Computer Science and CS04222 Data Structures and Algorithms. Type of Course: ☒ Required ☐ Elective ☐ Selected Elective.

    Specific goals for the course. 1. Algorithm complexity: students have analyzed the worst-case runtime complexity of algorithms, including the quantification of resources required for computation of basic problems. 2. Algorithm design: students have applied multiple algorithm design strategies. 3. Classic algorithms: students have demonstrated understanding of algorithms for several well-known computer science problems and are able to implement these algorithms. Each goal maps to ABET (j): an ability to apply mathematical foundations, algorithmic principles, and computer science theory in the modeling and design of computer-based systems in a way that demonstrates comprehension of the tradeoffs involved in design choices.
  • The Complexity Zoo
    The Complexity Zoo. Scott Aaronson, www.ScottAaronson.com. LaTeX translation by Chris Bourke, cbourke@cse.unl.edu. 417 classes and counting.

    Contents: 1 About This Document; 2 Introductory Essay (2.1 Recommended Further Reading; 2.2 Other Theory Compendia; 2.3 Errors?); 3 Pronunciation Guide; 4 Complexity Classes; 5 Special Zoo Exhibit: Classes of Quantum States and Probability Distributions; 6 Acknowledgements; 7 Bibliography.

    1 About This Document. What is this? Well, it's a PDF version of the website www.ComplexityZoo.com typeset in LaTeX using the complexity package. Well, what's that? The original Complexity Zoo is a website created by Scott Aaronson which contains a (more or less) comprehensive list of complexity classes studied in the area of theoretical computer science known as Computational Complexity. I took on the (mostly painless, thank god for regular expressions) task of translating the Zoo's HTML code to LaTeX for two reasons. First, as a regular Zoo patron, I thought, "what better way to honor such an endeavor than to spruce up the cages a bit and typeset them all in beautiful LaTeX." Second, I thought it would be a perfect project to develop complexity, a LaTeX package I've created that defines commands to typeset (almost) all of the complexity classes you'll find here (along with some handy options that allow you to conveniently change the fonts with a single option parameter). To get the package, visit my own home page at http://www.cse.unl.edu/~cbourke/.
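    A minimal LaTeX sketch of how such a package is typically used (an assumption added for illustration; consult the package's own documentation for its actual macro list, though class-name commands such as \P, \NP and \PSPACE are its advertised purpose):

        \documentclass{article}
        \usepackage{complexity}   % the package described above
        \begin{document}
        % Class names are typeset by macros named after the classes themselves:
        We believe $\P \neq \NP$, and we know $\P \subseteq \NP \subseteq \PSPACE$.
        \end{document}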