Algorithms and Computational Complexity: an Overview

Winter 2011, Larry Ruzzo
Thanks to Paul Beame, James Lee, and Kevin Wayne for some slides.

goals
Design of Algorithms – a taste:
- design methods
- common or important types of problems
- analysis of algorithms – efficiency

goals
Complexity & intractability – a taste:
- solving problems in principle is not enough; algorithms must be efficient
- some problems have no efficient solution
- NP-complete problems: an important & useful class of problems whose solutions (seemingly) cannot be found efficiently

complexity example
Cryptography (e.g. RSA, SSL in browsers):
- Secret: p, q prime, say 512 bits each
- Public: n = p x q, 1024 bits
- In principle there is an algorithm that, given n, will find p and q: try all 2^512 possible p's - but that is an astronomical number.
- In practice no fast algorithm is known for this problem (on non-quantum computers); the security of RSA depends on this fact (and research in "quantum computing" is strongly driven by the possibility of changing it).

algorithms versus machines
Moore's Law and the exponential improvements in hardware...
Ex: sparse linear equations over 25 years - 10 orders of magnitude improvement!

algorithms or hardware?
[Chart: 25 years of progress solving sparse linear systems; seconds vs. year, 1960-2000, on machines from the CDC 3600 through the Cray 3 (est.); G.E. = Gaussian Elimination. Hardware alone: 4 orders of magnitude. Source: Sandia, via M. Schultz.]

algorithms or hardware?
[Same chart with algorithmic improvements added: Sparse G.E., Gauss-Seidel, SOR = Successive OverRelaxation, CG = Conjugate Gradient. Software alone: 6 orders of magnitude. Source: Sandia, via M. Schultz.]

algorithms or hardware?
The N-Body Problem: in 30 years, a factor of 10^7 from hardware and 10^10 from software. Source: T. Quinn.

algorithms: a definition
Procedure to accomplish a task or solve a well-specified problem.
- Well-specified: know what all possible inputs look like and what output looks like given them.
- "Accomplish" via simple, well-defined steps.
- Ex: sorting names (via comparison)
- Ex: checking for primality (via +, -, *, /, ≤)

algorithms: a sample problem
A printed circuit-board company has a robot arm that solders components to the board. Time is proportional to the total distance the arm must move from its initial rest position around the board and back to the initial position. For each board design, find the best order to do the soldering.

printed circuit board
[Figures: two slides showing printed circuit boards.]

more precise problem definition
Input: a set S of n points in the plane.
Output: the shortest cycle tour that visits each point in the set S.
Better known as "TSP". How might you solve it?

nearest neighbor heuristic
- Start at some point p0.
- Walk first to its nearest neighbor p1.
- Walk to the nearest unvisited neighbor p2, then the nearest unvisited p3, ... until all points have been visited.
- Then walk back to p0.
heuristic: a rule of thumb, simplification, or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. May be good, but usually not guaranteed to give the best or fastest solution.

nearest neighbor heuristic
[Figure: a nearest-neighbor tour through points p0, p1, ..., p6.]

an input where nn works badly
[Figure: points on a line with gaps 16, 4, 1, .9, 2, 8 around p0; the nearest-neighbor tour starting at p0 has length ~ 84.]
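The procedure above translates almost directly into code. Here is a minimal Python sketch of the nearest-neighbor heuristic (the helper names and the sample coordinates are illustrative assumptions, not from the slides):

```python
import math

def nearest_neighbor_tour(points):
    """Greedy TSP heuristic: from the current point, always walk to the
    closest unvisited point; returns the visiting order (indices into points).
    Runs in O(n^2) time for n points."""
    unvisited = set(range(1, len(points)))
    tour = [0]  # start at p0
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def cycle_length(points, tour):
    """Length of the closed cycle p0 -> p1 -> ... -> p0."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# Illustrative points only (not the slides' example).
pts = [(0, 0), (2, 1), (5, 0), (6, 3), (1, 4)]
order = nearest_neighbor_tour(pts)
print(order, round(cycle_length(pts, order), 2))
```

Like any greedy method, it commits to each step locally and never reconsiders; that is exactly what the bad example above exploits.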
an input where nn works badly
[Figure: the same example; the optimal solution has length ~ 64.]

revised heuristic – closest pairs first
Repeatedly join the closest pair of points (such that the result can still be part of a single loop in the end, i.e., join endpoints, but not points in the middle, of path segments already created).
How does this work on our bad example?
[Figure: the same line of points with gaps 16, 4, 1, .9, 2, 8 around p0.]

a bad example for closest pair
[Figure: a point configuration with pairwise gaps 1, 1.5, and 1.5.]

a bad example for closest pair
[Figure: on this configuration, the closest-pair tour has length 6 + √10 ≈ 9.16, vs. 8 for the optimal tour.]

something that works
"Brute Force Search": for each of the n! = n(n-1)(n-2)...1 orderings of the points, check the length of the cycle; keep the best one.

two notes
The two incorrect algorithms were greedy:
- Often very natural & tempting ideas.
- They make choices that look great "locally" (and never reconsider them).
- When greed works, the algorithms are typically efficient.
- BUT: greed often does not work - you get boxed in.
Our correct algorithm avoids this, but is incredibly slow:
- 20! is so large that checking one billion orderings per second would take 2.4 billion seconds (around 70 years!).
- And growing: n! ~ √(2πn)·(n/e)^n ~ 2^O(n log n).

the morals of the story
- Algorithms are important: many performance gains outstrip Moore's law.
- Simple problems can be hard: factoring, TSP, many others.
- Simple ideas don't always work: nearest neighbor, closest pair heuristics.
- Simple algorithms can be very slow: brute-force factoring, TSP.
- A point we hope to make: for some problems, even the best algorithms are slow.

my plan
- A brief overview of the theory of algorithms: efficiency & asymptotic analysis; some scattered examples of simple problems where clever algorithms help.
- A brief overview of the theory of intractability: especially NP-complete problems.
- "Basics every educated CSE student should know."

computational complexity
The complexity of an algorithm associates a number T(n), the worst-case time the algorithm takes, with each problem size n.
Mathematically, T: N+ → R+, i.e., T is a function mapping positive integers (problem sizes) to positive real numbers (number of steps).

computational complexity
[Figure: T(n) plotted as a curve; time on the vertical axis, problem size on the horizontal axis.]

computational complexity: general goals
Characterize the growth rate of worst-case run time as a function of problem size, up to a constant factor.
Why not try to be more precise?
- Average-case, e.g., is hard to define and analyze.
- Technological variations (computer, compiler, OS, ...) easily change results by 10x or more.
- Being more precise is a ton of work.
- A key question is "scale-up": if I can afford this today, how much longer will it take when my business is 2x larger? (E.g., today: cn^2; next year: c(2n)^2 = 4cn^2, so 4x longer.)

computational complexity
[Figure: the same T(n) curve, sandwiched between n log2 n and 2n log2 n.]

asymptotic analysis & big-O
Given two functions f and g: N → R, f(n) is O(g(n)) iff ∃ constant c > 0 so that f(n) is eventually always ≤ c·g(n).
Example: 10n^2 - 16n + 100 is O(n^2) (and also O(n^3), ...).
Why? 10n^2 - 16n + 100 ≤ 11n^2 for all n ≥ 10.

polynomial vs exponential
For all r > 1 (no matter how small) and all d > 0 (no matter how large), n^d = O(r^n); e.g., n^100 = O(1.01^n).
In short, every exponential grows faster than every polynomial!

the complexity class P: polynomial time
P: running time O(n^d) for some constant d (d is independent of the input size n).
Nice scaling property: there is a constant c such that doubling n increases the time by only a factor of c (e.g., c ~ 2^d).
Contrast with exponential: for any constant c, there is a d such that going from n to n + d increases the time by a factor of more than c (e.g., 2^n vs. 2^(n+1)).
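As a quick numeric illustration of this contrast (a sketch only; the choice of n^3 and 2^n as representatives is an assumption for the example):

```python
# Polynomial vs. exponential scaling, as claimed above:
# for n^d, doubling n multiplies the time by the constant 2^d (here 8);
# for 2^n, merely adding 1 to n already doubles the time.
def poly_time(n, d=3):
    return n ** d

def exp_time(n):
    return 2 ** n

for n in (10, 20, 40):
    print(f"n = {n:2d}:  n^3 grows {poly_time(2 * n) / poly_time(n):.0f}x when n doubles, "
          f"2^n grows {exp_time(n + 1) / exp_time(n):.0f}x when n -> n + 1")
```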
polynomial vs exponential growth
[Figure: plots of 2^(2n), 2^(n/10), and 1000n^2; the exponentials eventually dwarf the polynomial.]

why it matters
Exponential running times not only get very big, but do so abruptly, which likely yields erratic performance on small instances.

another view of poly vs exp
Next year's computer will be 2x faster. If I can solve a problem of size n0 today, how large a problem can I solve in the same time next year?

  Complexity   Increase          E.g., T = 10^12
  O(n)         n0 → 2 n0         10^12 → 2 x 10^12
  O(n^2)       n0 → √2 n0        10^6  → 1.4 x 10^6
  O(n^3)       n0 → ∛2 n0        10^4  → 1.25 x 10^4
  2^(n/10)     n0 → n0 + 10      400   → 410
  2^n          n0 → n0 + 1       40    → 41

complexity summary
The typical initial goal for algorithm analysis is to find an asymptotic upper bound on worst-case running time as a function of problem size.
This is rarely the last word, but often helps separate good algorithms from blatantly poor ones - concentrate on the good ones!

why "polynomial"?
The point is not that n^2000 is a nice time bound, or that the differences among n, 2n, and n^2 are negligible. Rather, simple theoretical tools may not easily capture such differences, whereas exponentials are qualitatively different from polynomials, and so more amenable to theoretical analysis.
- "My problem is in P" is a starting point for a more detailed analysis.
- "My problem is not in P" may suggest that you need to shift to a more tractable variant, or otherwise readjust expectations.

algorithm design techniques
We will survey two:
- Later: Dynamic programming - orderly solution of many smaller sub-problems, typically non-disjoint; can give exponential speedups compared to more brute-force approaches.
- Today: Divide & Conquer - reduce a problem to one or more sub-problems of the same type, typically disjoint; often gives a significant, usually polynomial, speedup.

algorithm design techniques
Divide & Conquer:
- Reduce a problem to one or more sub-problems of the same type.
- Each sub-problem's size is a fraction of the original.
- Sub-problems are typically disjoint.
- Often gives a significant, usually polynomial, speedup.
- Examples: Mergesort, Binary Search, Strassen's Algorithm, Quicksort (roughly).

divide & conquer – the key idea
Suppose we've already invented DumbSort, taking time n^2. Try just one level of divide & conquer:
  DumbSort(first n/2 elements)
  DumbSort(last n/2 elements)
  Merge results
Time: 2(n/2)^2 + n = n^2/2 + n << n^2. Almost twice as fast! This is D&C in a nutshell.

d&c approach, cont.
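A minimal sketch of that one-level idea in Python; the slides leave DumbSort abstract, so selection sort stands in for it here (an assumption), with the usual linear-time merge:

```python
def dumb_sort(a):
    """A deliberately naive O(n^2) sort (selection sort), standing in for
    the slides' abstract DumbSort."""
    a = list(a)
    for i in range(len(a)):
        j = min(range(i, len(a)), key=a.__getitem__)  # index of smallest remaining
        a[i], a[j] = a[j], a[i]
    return a

def merge(left, right):
    """Merge two sorted lists in linear time."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def one_level_dc_sort(a):
    """One level of divide & conquer: DumbSort each half, then merge.
    Cost: 2*(n/2)^2 + n = n^2/2 + n, about half of DumbSort's n^2."""
    mid = len(a) // 2
    return merge(dumb_sort(a[:mid]), dumb_sort(a[mid:]))

print(one_level_dc_sort([5, 2, 9, 1, 7, 3, 8, 6]))  # [1, 2, 3, 5, 6, 7, 8, 9]
```

Recursing on the halves instead of stopping at one level is exactly how Mergesort turns this constant-factor saving into an asymptotic one.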