CSC 308 (Algorithms and Complexity Analysis)

DISCLAIMER

The contents of this document are intended for learning purposes at the undergraduate level. The materials are drawn from different sources, including the internet, and the contributor does not in any way claim authorship or ownership of them.

Introduction to the Course

This course applies design and analysis techniques to numeric and non-numeric algorithms that act on data structures. Design is emphasized so that the student will be able to develop new algorithms. Analysis of algorithms is concerned with the resources an algorithm must use to reach a solution. Only theoretical techniques of analysis are covered. Topics include: introduction to algorithms; basic algorithmic analysis (asymptotic analysis of upper and average complexity bounds; asymptotic complexity; sorting and searching); standard complexity classes; time and space trade-offs in algorithm analysis; recursive algorithms; algorithmic strategies; and fundamental computing algorithms (numerical algorithms, sequential and binary search algorithms, sorting algorithms, binary search trees, hash tables, and graphs and their representations).

Objectives

By the end of the semester, the student should be able to:
(1) describe algorithms and types of algorithms;
(2) describe and recognize the use of analysis of algorithms, such as asymptotic analysis of upper and average complexity bounds;
(3) recognize the use of different types of algorithms, such as sorting, searching and recursive algorithms, and use these algorithms/methods in solving problems;
(4) determine asymptotic growth rates for algorithms;
(5) apply graphs and their theorems/representations to algorithm analysis.

Course Overview

Basic algorithmic analysis: asymptotic analysis of upper and average complexity bounds; standard complexity classes; time and space trade-offs in algorithm analysis; recursive algorithms. Algorithmic strategies. Fundamental computing algorithms: numerical algorithms, sequential and binary search algorithms, sorting algorithms, binary search trees, hash tables, and graphs and their representations.

Course Plan

Week 1: Introduction, basic algorithm analysis, algorithm complexity, comparing algorithms
Week 2: Types of algorithms
Week 3: Asymptotic complexity; asymptotic analysis of upper and average complexity bounds
Week 4: Standard complexity classes; time and space trade-offs in algorithm analysis
Week 5: Revision and 1st test
Week 6: Searching (binary search) and sorting algorithms
Week 7: Recursive algorithms
Week 8: Other fundamental computing algorithms: numerical algorithms, sequential algorithms
Week 9: Binary search trees, hash tables
Week 10: Graphs and their representations
Week 11: Revision and 2nd test

Lecture One: Introduction

It is very common for beginning computer science students to compare their programs with one another. You may also have noticed that it is common for computer programs to look very similar, especially the simple ones. An interesting question often arises: when two programs solve the same problem but look different, is one program better than the other? To answer this question, we need to remember that there is an important difference between a program and the underlying algorithm that the program represents. An algorithm is a generic, step-by-step list of instructions for solving a problem.
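For instance, here is one simple summation algorithm ("start a total at zero, then add each number in turn") encoded as two different Python programs; the function and variable names are arbitrary choices made for this illustration:

# Program 1: the summation algorithm written with a for loop.
def sum_of_numbers(numbers):
    total = 0
    for value in numbers:
        total = total + value
    return total

# Program 2: the same algorithm, with different names and loop style.
def compute_total(nums):
    running = 0
    i = 0
    while i < len(nums):
        running = running + nums[i]
        i = i + 1
    return running

print(sum_of_numbers([1, 2, 3, 4, 5]))  # prints 15
print(compute_total([1, 2, 3, 4, 5]))   # prints 15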
An algorithm, then, is a method for solving any instance of a problem: given a particular input, it produces the desired result. A program, on the other hand, is an algorithm that has been encoded into some programming language. There may be many programs for the same algorithm, depending on the programmer and the programming language being used.

Algorithms

What is an algorithm? An algorithm is a finite set of precise instructions for performing a computation or for solving a problem. It can also be defined as step-by-step instructions to accomplish a task or solve a problem. The format and level of detail depend on the user.

Properties of algorithms:
• Input from a specified set,
• Output from a specified set (the solution),
• Definiteness of every step in the computation,
• Correctness of the output for every possible input,
• Finiteness of the number of calculation steps,
• Effectiveness of each calculation step, and
• Generality for a class of problems.

Algorithm Examples

We will use pseudocode, slightly reminiscent of Basic and Pascal, to specify algorithms.

Example: an algorithm that finds the maximum element in a finite sequence.

procedure max(a1, a2, …, an: integers)
    max := a1
    for i := 2 to n
        if max < ai then max := ai
{max is the largest element}

Another example: a linear search algorithm, that is, an algorithm that linearly searches a sequence for a particular element.

procedure linear_search(x: integer; a1, a2, …, an: integers)
    i := 1
    while (i ≤ n and x ≠ ai)
        i := i + 1
    if i ≤ n then location := i
    else location := 0
{location is the subscript of the term that equals x, or is zero if x is not found}

Analysis of Algorithms

In computer science, the analysis of algorithms is the determination of the computational complexity of algorithms, that is, the amount of time, storage and/or other resources necessary to execute them. Usually, this involves determining a function that relates the length of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small. Since different inputs of the same length may cause the algorithm to behave differently, the function describing its performance is usually an upper bound on the actual performance, determined from the worst-case inputs to the algorithm.

The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of the broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm that solves a given computational problem. These estimates provide insight into reasonable directions of search for efficient algorithms.

Efficiency of Algorithms

Different algorithms for a single task may have different efficiency. We must therefore be able to evaluate the efficiency (speed) of computer algorithms.

Complexity

Complexity = speed = efficiency: complexity is the number of basic operations required by an algorithm. Will two algorithms with the same complexity take the same actual amount of time to run? Not necessarily, since the actual running time also depends on the hardware, the language and the implementation.

Complexity Examples

Compare a program that repeats its steps a fixed ten times with a straight-line program that performs them once:

    Do 10 times          Read x, y, z
        Read x, y, z     x = y + z
        x = y + z        Print x
        Print x

Both require a constant number of basic operations. Now compare the fixed repetition with a repetition that depends on the input size n:

    Do 10 times          Do n times
        Read x, y, z         Read x, y, z
        x = y + z            x = y + z
        Print x              Print x

The number of operations of the second program grows with n.

Complexity

In general, we are not so much interested in the time and space complexity for small inputs.
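To make the effect of input size concrete, the short Python sketch below counts the comparisons the linear search algorithm given earlier performs in the worst case, when the element is absent; the helper name linear_search_count is an arbitrary choice for this illustration:

def linear_search_count(x, a):
    # Linear search over a sequence, returning (location, comparisons).
    # Locations are 1-based to match the pseudocode; 0 means not found.
    comparisons = 0
    for i, value in enumerate(a, start=1):
        comparisons += 1
        if value == x:
            return i, comparisons
    return 0, comparisons

# Searching for an absent element: the count equals the input length n.
for n in (10, 1000, 1000000):
    location, steps = linear_search_count(-1, range(n))
    print(n, steps)   # prints 10 10, then 1000 1000, then 1000000 1000000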
For example, while the difference in time complexity between linear and binary search is meaningless for a sequence with n = 10, it is gigantic for n = 2^30.

For another example, let us assume two algorithms A and B that solve the same class of problems. The time complexity of A is 5,000n, while that of B is 1.1^n, for an input with n elements. For n = 10, A requires 50,000 steps, but B only 3, so B seems to be superior to A. For n = 1,000, however, A requires 5,000,000 steps, while B requires 2.5 × 10^41 steps.

Comparison of the time complexities of algorithms A and B:

    Input size n    Algorithm A (5,000n)    Algorithm B (1.1^n)
    10              50,000                  3
    100             500,000                 13,781
    1,000           5,000,000               2.5 × 10^41
    1,000,000       5 × 10^9                4.8 × 10^41392

This means that algorithm B cannot be used for large inputs, while algorithm A is still feasible. So what is important is the growth of the complexity functions. The growth of time and space complexity with increasing input size n is a suitable measure for the comparison of algorithms.

Comparing Algorithms

We want to choose the "best" algorithm for a task. Generally, best = fastest, but there are other considerations:
– Hardware
– Size of the data set
– Data structure
A standard measurement is therefore needed.

The Need for Analysis

Algorithms are often quite different from one another, even though their objectives are the same. For example, we know that a set of numbers can be sorted using different algorithms. The number of comparisons performed by one algorithm may differ from that of another for the same input; hence, the time complexities of those algorithms may also differ. At the same time, we need to calculate the memory space required by each algorithm.

Analysis of an algorithm is the process of analyzing its problem-solving capability in terms of the time and space required (the amount of memory needed for storage during execution). The main concern of analysis of algorithms, however, is the required time, i.e. performance. Generally, we perform the following types of analysis:
• Worst case − the maximum number of steps taken on any instance of size n.
• Best case − the minimum number of steps taken on any instance of size n.
• Average case − the average number of steps taken on any instance of size n.

To solve a problem, we need to consider space complexity as well as time complexity, since the program may run on a system where memory is limited but processing time is ample, or vice versa. In this context, compare bubble sort and merge sort: bubble sort does not require additional memory, but merge sort does. Although the time complexity of bubble sort is higher than that of merge sort, we may need to apply bubble sort if the program has to run in an environment where memory is very limited.

Bubble Sort

def bubbleSort(alist):
    # Each pass bubbles the largest remaining element to the end
    # of the unsorted part of the list.
    for passnum in range(len(alist) - 1, 0, -1):
        for i in range(passnum):
            # Swap adjacent elements that are out of order.
            if alist[i] > alist[i + 1]:
                temp = alist[i]
                alist[i] = alist[i + 1]
                alist[i + 1] = temp

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
bubbleSort(alist)
print(alist)

Merge Sort

Conceptually, a merge sort works as follows:
• Divide the unsorted list into n sublists, each containing one element (a list of one element is considered sorted).
• Repeatedly merge sublists to produce new sorted sublists until only one sublist remains; this is the sorted list.
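A merge sort along these lines can be sketched in Python as follows; this is one common recursive formulation, given as an illustration (the course may present a different version):

def mergeSort(alist):
    # Base case: lists of length 0 or 1 are already sorted.
    if len(alist) <= 1:
        return alist
    # Divide: split the list into two halves and sort each recursively.
    mid = len(alist) // 2
    left = mergeSort(alist[:mid])
    right = mergeSort(alist[mid:])
    # Merge the two sorted halves into a new list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

alist = [54, 26, 93, 17, 77, 31, 44, 55, 20]
print(mergeSort(alist))

Note that bubbleSort above sorts in place, while this mergeSort builds new lists during merging; those auxiliary lists are the additional space referred to in the comparison of the two algorithms.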