Math 3012 Lecture 8 - Complexity and Problem Size


Luís Pereira, Georgia Tech, September 7, 2018

Complexity and problem/input size

Terminology. We say that a problem has size n if the problem data can be described using n pieces of information, where each piece can be read in a constant amount of time.

Basic example. The problem data is a list of n numbers below 1000000.

Non-example. The problem data is a list of n numbers below 1000n. Since each number takes about log_2(1000n) bits to write down, the problem size is n log_2(1000n) = n(log_2 1000 + log_2 n) ≈ n log_2 n.

Running time

Terminology. Suppose that for an input of size n an algorithm requires f(n) steps/operations. We call f(n) the running time of the algorithm.

Remark. Typically one can't determine f(n) precisely, but one can give a reasonable estimate.

Question. Given two algorithms for the same task with running times f(n) and g(n), how does one compare their running times?

Some common increasing functions

In practice f(n) → ∞ as n increases. Some possible such functions are

log* n, log log n, log n, n^0.1, √n, n, n log n, n^1.1, n^2, n^3, n^(log n), 2^n, 10^n, (log n)^n, (√n)^n, n^n, 2^(2^n), 2^(2^(2^n))

Big-Oh notation

Definition. Given two (positive) functions f(n) and g(n), we write f = O(g), or f(n) = O(g(n)), if there is a constant C such that f(n) ≤ C · g(n). Informally: "f is never much bigger than g."

Remark. If there are an integer M and a constant C such that f(n) ≤ C · g(n) for n ≥ M, then there is some other constant C̃ such that f(n) ≤ C̃ · g(n) for all n. In words: the condition f = O(g) only depends on large n.

Big-Oh notation: example

Suppose there are 3 algorithms for the same task and
- algorithm 1 has running time f1(n) = n^3 + 3 log n
- algorithm 2 has running time f2(n) = 2n^3 + n^2
- algorithm 3 has running time f3(n) = 200n^2

How do these running times compare?

Answer. For small n we have f1(n) < f2(n) < f3(n). For large n one has f2(n) ≈ 2·f1(n), so f1 = O(f2) and f2 = O(f1). Further, f3(n) is eventually much smaller than f1(n) and f2(n), so f3 = O(f1) and f3 = O(f2), but neither f1 = O(f3) nor f2 = O(f3).

Little-oh notation

Definition. Given two (positive) functions f(n) and g(n), we write f = o(g), or f(n) = o(g(n)), if lim_{n→∞} f(n)/g(n) = 0. Informally: "f is eventually much smaller than g."

Example. For f1(n) = n^3 + 3 log n, f2(n) = 2n^3 + n^2 and f3(n) = 200n^2 one has f3 = o(f1) and f3 = o(f2).

Common increasing functions revisited

In the previous list, for any consecutive f(n), g(n) it is f = o(g):

log* n ≺ log log n ≺ log n ≺ n^0.1 ≺ √n ≺ n ≺ n log n ≺ n^1.1 ≺ n^2 ≺ n^3 ≺ n^(log n) ≺ 2^n ≺ 10^n ≺ (log n)^n ≺ (√n)^n ≺ n^n ≺ 2^(2^n) ≺ 2^(2^(2^n))

Alternative notation (warning: not commonly used)
- f ⪯ g means f = O(g)
- f ≺ g means f = o(g)

Piazza poll

Question. Consider the following running time functions:
- f1(n) = 2^n + 20 log n
- f2(n) = n^3 + 2^n
- f3(n) = n^4 + n^5

Choose the option with two correct statements.

Answers:
(A) f1 = o(f2) and f1 = O(f3)
(B) f1 = o(f2) and f3 = O(f1)
(C) f2 = O(f1) and f1 = O(f3)
(D) f2 = O(f1) and f3 = O(f1)

Some motivating problems (1)

Goal: determine how difficult a given problem is. Given a list S of numbers, consider the following problems:

I) What is the largest integer in S?
II) If a is the first number in S, are there integers b, c in S such that a = b + c?
III) Are there integers a, b, c in S such that a = b + c?
IV) Does S satisfy fair division? (i.e. can S be divided into two parts with the same sum?)

Some motivating problems (2)

I) What is the largest integer in S? This can be done in |S| = n steps.
II) If a is the first number in S, are there integers b, c in S such that a = b + c? This can be done in (n choose 2) ≈ n^2/2 steps.
III) Are there integers a, b, c in S such that a = b + c? This can be done in n·(n choose 2) ≈ n^3/2 steps.

In practice one often ignores the constants; the important parts are n, n^2, n^3.

Some motivating problems (3) - Fair division

IV) Does S satisfy fair division?
- There are 2^n ways to break S into parts A ∪ B.
- Testing whether Σ_{s ∈ A} s = Σ_{s ∈ B} s requires n − 2 additions and one comparison.
- Solving fair division via this algorithm therefore requires about n·2^n steps.

Question from lecture 1. Consider S with n = |S| = 1000000. Alice thinks S can't be fairly divided while Bob thinks it can. Who (if they are correct) would find it easier to convince Carlos?

Answer. Bob: if he has a partition S = A ∪ B, that partition can be tested in about n = 1000000 steps. Alice, on the other hand, would need about n·2^n = 1000000 · 2^1000000 steps to prove she is right.

Observation. Possible answers can be tested in polynomial time, but we don't know how to find an answer in polynomial time.
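To make Bob's advantage concrete, here is a minimal Python sketch (an illustration, not the lecture's code; the function names are my own) contrasting the two tasks: certifying a proposed partition takes about n additions, while the naive search for one inspects all 2^n subsets.

```python
from itertools import combinations

def verify_partition(S, A):
    """Certify a proposed fair division: about n additions and one comparison."""
    total = sum(S)              # n - 1 additions
    return 2 * sum(A) == total  # A and its complement S \ A have equal sums

def find_fair_division(S):
    """Brute force: try all 2^n ways to split S, about n * 2^n steps."""
    for r in range(len(S) + 1):
        for A in combinations(S, r):
            if verify_partition(S, A):
                return list(A)  # a certificate Bob can hand to Carlos
    return None                 # Alice's claim: all 2^n splits fail

S = [3, 1, 4, 2, 2]
print(find_fair_division(S))    # [4, 2], since 4 + 2 = 3 + 1 + 2 = 6
```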
P vs. NP

Two important classes of problems:
- P: a problem is in the class P if an answer can be found in polynomial running time.
- NP: a problem is in the class NP if, given a possible answer, that answer can be tested/certified in polynomial running time.

Examples:
- Problems I, II, III from before are in P.
- Problem IV is in NP.

Observations:
- P ⊆ NP, i.e. all P problems are also NP problems.
- It is unknown whether P = NP or not.

Sorting

Sorting problem. Given a list of numbers a1, a2, a3, ..., a100, reorder the list so that it is in increasing order.

Observation. The basic steps in solving this problem are comparison questions like: Is a1 < a2? Is a3 < a7? Is a9 < a98? How many such questions are needed?

Brute force method. Ask all (n choose 2) comparison questions. This solves the problem in O(n^2) running time. How much can this running time be improved?

Worst case scenario bound on sorting

Observations:
- When reordering a list of size n there are n! possibilities to consider.
- When asking a yes/no question, in the worst case scenario you are still left with at least 1/2 of the possibilities.

Upshot. A foolproof sorting algorithm will always need to make at least log_2(n!) comparisons.

Stirling's approximation

Fact (Stirling's approximation). lim_{n→∞} n! / (√(2πn)·(n/e)^n) = 1.

Consequence: log_2(n!) ∼ n log_2 n.

Upshot. A sorting algorithm is considered optimal if its running time is O(n log n). There are many optimal algorithms: merge sort, heap sort, introsort, Timsort, Cubesort and Block sort, among others.

A quick overview of merge sort

Key fact. Given two ordered lists of size k, merging them into a single ordered list requires about 2k comparisons.

Why? The strategy is:
- Compare the lowest elements of the two lists; move the smaller to the merged list.
- Compare the (new) lowest elements; move the smaller to the merged list.
- etc.

Merge sort strategy:
1. Split the list of size n into two sublists of size n/2.
2. Sort each of the two sublists (using merge sort).
3. Merge the sublists.

Upshot. The running time satisfies r(n) = 2·r(n/2) + n. One can show that this implies r(n) = O(n log n).
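The lecture stops at the recurrence, but a minimal Python sketch of this strategy (my own illustration, with hypothetical function names) might look as follows: merge steps through two sorted lists using about 2k comparisons, and merge_sort splits, recurses and merges, so its running time satisfies r(n) = 2·r(n/2) + n.

```python
def merge(L, R):
    """Merge two sorted lists by repeatedly comparing their lowest elements."""
    out, i, j = [], 0, 0
    while i < len(L) and j < len(R):   # about 2k comparisons when |L| = |R| = k
        if L[i] <= R[j]:
            out.append(L[i]); i += 1   # move the smaller element to the merged list
        else:
            out.append(R[j]); j += 1
    out.extend(L[i:])                  # one list is exhausted; append the
    out.extend(R[j:])                  # leftovers of the other unchanged
    return out

def merge_sort(lst):
    """Split in half, sort each half recursively, then merge."""
    if len(lst) <= 1:                  # a list of size 0 or 1 is already sorted
        return lst
    mid = len(lst) // 2
    return merge(merge_sort(lst[:mid]), merge_sort(lst[mid:]))

print(merge_sort([9, 3, 7, 1, 8, 2]))  # [1, 2, 3, 7, 8, 9]
```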