Quicksort Partitioning Overview


Algorithms, 4th Edition · Robert Sedgewick | Kevin Wayne · http://algs4.cs.princeton.edu

2.3 QUICKSORT
‣ quicksort
‣ selection
‣ duplicate keys
‣ system sorts

Two classic sorting algorithms: mergesort and quicksort
Critical components in the world's computational infrastructure.
・Full scientific understanding of their properties has enabled us to develop them into practical system sorts.
・Quicksort honored as one of the top 10 algorithms of the 20th century in science and engineering.
Mergesort. [last lecture]
Quicksort. [this lecture]

Quicksort
Basic plan.
・Shuffle the array.
・Partition so that, for some j:
  – entry a[j] is in place
  – no larger entry to the left of j
  – no smaller entry to the right of j
・Sort each subarray recursively.

Quicksort overview (trace):
  input        Q U I C K S O R T E X A M P L E
  shuffle      K R A T E L E P U I M Q C X O S    (partitioning item: K)
  partition    E C A I E K L P U T M Q R X O S    (left of K: not greater; right of K: not less)
  sort left    A C E E I K L P U T M Q R X O S
  sort right   A C E E I K L M O P Q R S T U X
  result       A C E E I K L M O P Q R S T U X

Tony Hoare (1980 Turing Award)
・Invented quicksort to translate Russian into English [but couldn't explain his algorithm or implement it!].
・Learned Algol 60 (and recursion).
・Implemented quicksort.

From Communications of the ACM (July 1961):

ALGORITHM 63: PARTITION
C. A. R. Hoare, Elliott Brothers Ltd., Borehamwood, Hertfordshire, Eng.

procedure partition (A,M,N,I,J); value M,N;
    array A; integer M,N,I,J;
    comment I and J are output variables, and A is the array (with subscript
      bounds M:N) which is operated upon by this procedure. Partition takes the
      value X of a random element of the array A, and rearranges the values of
      the elements of the array in such a way that there exist integers I and J
      with the following properties:
          M ≤ J < I ≤ N  provided M < N
          A[R] ≤ X  for M ≤ R ≤ J
          A[R] = X  for J < R < I
          A[R] ≥ X  for I ≤ R ≤ N
      The procedure uses an integer procedure random (M,N) which chooses
      equiprobably a random integer F between M and N, and also a procedure
      exchange, which exchanges the values of its two parameters;
begin real X; integer F;
    F := random (M,N); X := A[F];
    I := M; J := N;
up: for I := I step 1 until N do
        if X < A[I] then go to down;
    I := N;
down: for J := J step -1 until M do
        if A[J] < X then go to change;
    J := M;
change: if I < J then begin exchange (A[I], A[J]);
            I := I + 1; J := J - 1;
            go to up end
        else if I < F then begin exchange (A[I], A[F]);
            I := I + 1 end
        else if F < J then begin exchange (A[F], A[J]);
            J := J - 1 end
end partition

ALGORITHM 64: QUICKSORT
C. A. R. Hoare, Elliott Brothers Ltd., Borehamwood, Hertfordshire, Eng.

procedure quicksort (A,M,N); value M,N;
    array A; integer M,N;
    comment Quicksort is a very fast and convenient method of sorting an array
      in the random-access store of a computer. The entire contents of the store
      may be sorted, since no extra space is required. The average number of
      comparisons made is 2(N-M) ln (N-M), and the average number of exchanges
      is one sixth this amount. Suitable refinements of this method will be
      desirable for its implementation on any actual computer;
begin integer I,J;
    if M < N then begin partition (A,M,N,I,J);
        quicksort (A,M,J);
        quicksort (A,I,N)
    end
end quicksort

ALGORITHM 65: FIND
C. A. R. Hoare, Elliott Brothers Ltd., Borehamwood, Hertfordshire, Eng.

procedure find (A,M,N,K); value M,N,K;
    array A; integer M,N,K;
    comment Find will assign to A[K] the value which it would have if the array
      A[M:N] had been sorted. The array A will be partly sorted, and subsequent
      entries will be faster than the first;
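For comparison with Hoare's original ALGOL listings above, here is a minimal Java sketch of the partitioning step described in the basic plan, written in the spirit of the algs4 code; the helper names less and exch, and the choice of a[lo] as the partitioning item (after a shuffle), are conventions assumed here rather than text recovered from the scanned page.

// Hoare-style partitioning sketch: rearranges a[lo..hi] and returns j such that
// a[lo..j-1] <= a[j] <= a[j+1..hi]. Assumes it is called with lo < hi.
private static int partition(Comparable[] a, int lo, int hi) {
    int i = lo, j = hi + 1;              // left and right scan indices
    Comparable v = a[lo];                // partitioning item
    while (true) {
        while (less(a[++i], v))          // scan right for an entry >= v
            if (i == hi) break;
        while (less(v, a[--j]))          // scan left for an entry <= v
            if (j == lo) break;
        if (i >= j) break;               // pointers crossed: partition done
        exch(a, i, j);                   // swap the two out-of-place entries
    }
    exch(a, lo, j);                      // put partitioning item v into a[j]
    return j;
}

private static boolean less(Comparable v, Comparable w) { return v.compareTo(w) < 0; }

private static void exch(Object[] a, int i, int j) { Object t = a[i]; a[i] = a[j]; a[j] = t; }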
Recommended publications
  • Secure Multi-Party Sorting and Applications
    Secure Multi-Party Sorting and Applications. Kristján Valur Jónsson¹, Gunnar Kreitz², and Misbah Uddin². ¹ Reykjavik University, ² KTH—Royal Institute of Technology. Abstract. Sorting is among the most fundamental and well-studied problems within computer science and a core step of many algorithms. In this article, we consider the problem of constructing a secure multi-party computing (MPC) protocol for sorting, building on previous results in the field of sorting networks. Apart from the immediate uses for sorting, our protocol can be used as a building block in more complex algorithms. We present a weighted set intersection algorithm, where each party inputs a set of weighted elements and the output consists of the input elements with their weights summed. As a practical example, we apply our protocols in a network security setting for aggregation of security incident reports from multiple reporters, specifically to detect stealthy port scans in a distributed but privacy-preserving manner. Both sorting and weighted set intersection use O(n log² n) comparisons in O(log² n) rounds with practical constants. Our protocols can be built upon any secret sharing scheme supporting multiplication and addition. We have implemented and evaluated the performance of sorting on the Sharemind secure multi-party computation platform, demonstrating the real-world performance of our proposed protocols. Keywords. Secure multi-party computation; Sorting; Aggregation; Cooperative anomaly detection. 1 Introduction. Intrusion Detection Systems (IDS) [16] are commonly used to detect anomalous and possibly malicious network traffic. Incidence reports and alerts from such systems are generally kept private, although much could be gained by cooperative sharing [30].
  • Mergesort and Quicksort
    Mergesort. Basic plan:
    ! Divide array into two halves.
    ! Recursively sort each half.
    ! Merge two halves to make sorted whole.
    • mergesort • mergesort analysis • quicksort • quicksort analysis • animations
    Reference: Algorithms in Java, Chapters 7 and 8. Copyright © 2007 by Robert Sedgewick and Kevin Wayne.
    Mergesort and Quicksort. Two great sorting algorithms.
    ! Full scientific understanding of their properties has enabled us to hammer them into practical system sorts.
    ! Occupy a prominent place in world's computational infrastructure.
    ! Quicksort honored as one of top 10 algorithms of 20th century in science and engineering.
    Mergesort. ! Java sort for objects. ! Perl, Python stable.
    Quicksort. ! Java sort for primitive types. ! C qsort, Unix, g++, Visual C++, Python.
    Merging. Combine two pre-sorted lists into a sorted whole. How to merge efficiently? Use an auxiliary array.

    private static void merge(Comparable[] a, Comparable[] aux, int l, int m, int r)
    {
        for (int k = l; k < r; k++)            // copy
            aux[k] = a[k];
        int i = l, j = m;
        for (int k = l; k < r; k++)            // merge
            if      (i >= m)                a[k] = aux[j++];
            else if (j >= r)                a[k] = aux[i++];
            else if (less(aux[j], aux[i]))  a[k] = aux[j++];
            else                            a[k] = aux[i++];
    }

    Mergesort: Java implementation of recursive sort.
    Mergesort analysis: Memory. Q. How much memory does mergesort require? A. Too much!
    public class Merge { ! Original input array = N.
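    The excerpt cuts off before the recursive sort that drives merge(); a minimal sketch consistent with the half-open merge(a, aux, l, m, r) shown above could look like the following (the method layout here is an assumption, not the book's exact code):

    // Recursively sorts a[l..r) using aux[] as scratch space, then merges.
    private static void sort(Comparable[] a, Comparable[] aux, int l, int r) {
        if (r - l <= 1) return;          // 0 or 1 element: already sorted
        int m = l + (r - l) / 2;         // midpoint
        sort(a, aux, l, m);              // sort left half a[l..m)
        sort(a, aux, m, r);              // sort right half a[m..r)
        merge(a, aux, l, m, r);          // merge the two sorted halves
    }

    public static void sort(Comparable[] a) {
        Comparable[] aux = new Comparable[a.length];
        sort(a, aux, 0, a.length);
    }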
  • 2.3 Quicksort
    2.3 Quicksort ‣ quicksort ‣ selection ‣ duplicate keys ‣ system sorts. Algorithms, 4th Edition · Robert Sedgewick and Kevin Wayne · Copyright © 2002–2010 · January 30, 2011 12:46:16 PM
    Two classic sorting algorithms. Critical components in the world's computational infrastructure.
    • Full scientific understanding of their properties has enabled us to develop them into practical system sorts.
    • Quicksort honored as one of top 10 algorithms of 20th century in science and engineering.
    Mergesort. [last lecture] • Java sort for objects. • Perl, C++ stable sort, Python stable sort, Firefox JavaScript, ...
    Quicksort. [this lecture] • Java sort for primitive types. • C qsort, Unix, Visual C++, Python, Matlab, Chrome JavaScript, ...
    Quicksort. Basic plan.
    • Shuffle the array.
    • Partition so that, for some j: element a[j] is in place; no larger element to the left of j; no smaller element to the right of j.
    • Sort each piece recursively.
    Sir Charles Antony Richard Hoare, 1980 Turing Award.
    input        Q U I C K S O R T E X A M P L E
    shuffle      K R A T E L E P U I M Q C X O S    (partitioning element: K)
    partition    E C A I E K L P U T M Q R X O S    (not greater | K | not less)
    sort left    A C E E I K L P U T M Q R X O S
    sort right   A C E E I K L M O P Q R S T U X
    result       A C E E I K L M O P Q R S T U X
    Quicksort overview. Quicksort partitioning: Basic plan.
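    A compact sketch of the recursive driver outlined in this basic plan, assuming a partition(a, lo, hi) method like the one sketched earlier on this page; the inline Fisher–Yates shuffle stands in for the textbook's StdRandom.shuffle and is an assumption, not the slide deck's code:

    // Shuffle for a probabilistic guarantee, then sort recursively around the
    // partition index returned by partition().
    public static void sort(Comparable[] a) {
        shuffle(a);
        sort(a, 0, a.length - 1);
    }

    private static void sort(Comparable[] a, int lo, int hi) {
        if (hi <= lo) return;
        int j = partition(a, lo, hi);    // a[j] is now in its final position
        sort(a, lo, j - 1);              // sort the left piece
        sort(a, j + 1, hi);              // sort the right piece
    }

    private static void shuffle(Object[] a) {
        java.util.Random rnd = new java.util.Random();
        for (int i = a.length - 1; i > 0; i--) {
            int r = rnd.nextInt(i + 1);  // uniform in [0, i]
            Object t = a[i]; a[i] = a[r]; a[r] = t;
        }
    }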
  • An Evolutionary Approach for Sorting Algorithms
    ORIENTAL JOURNAL OF COMPUTER SCIENCE & TECHNOLOGY, ISSN: 0974-6471, December 2014, Vol. 7, No. (3), pp. 369-376. An International Open Free Access, Peer Reviewed Research Journal. Published by: Oriental Scientific Publishing Co., India. www.computerscijournal.org
    Root to Fruit (2): An Evolutionary Approach for Sorting Algorithms. PRAMOD KADAM and SACHIN KADAM. BVDU, IMED, Pune, India. (Received: November 10, 2014; Accepted: December 20, 2014)
    Abstract. This paper continues the earlier thought of an evolutionary study of the sorting problem and sorting algorithms (Root to Fruit (1): An Evolutionary Study of Sorting Problem) [1] and concludes with a chronological list of the early pioneers of the sorting problem and its algorithms. Later in the study, a graphical method is used to present the evolution of the sorting problem and sorting algorithms on a time line.
    Key words: Evolutionary study of sorting, History of sorting, Early sorting algorithms, List of inventors for sorting.
    INTRODUCTION. In spite of plentiful literature and research in the sorting-algorithm domain, there is a mess in the documentation as far as credentials are concerned [2]. Perhaps this problem arises from a lack of coordination and the unavailability of a common platform or knowledge base in the domain. An evolutionary study of sorting algorithms and the sorting problem is the foundation of a futuristic knowledge base for the sorting problem domain [1]. Some names and their contributions may have been skipped from the study; therefore readers have every right to extend this study with valid proofs. Ultimately, our objective behind this research is very clear: to strengthen the evolutionary study of sorting algorithms and shift towards a good knowledge base that preserves the work of our forebears for upcoming generations. Otherwise, coming generations could receive hardly any information about sorting problems, and syllabi may be restricted to some
  • Improving the Performance of Bubble Sort Using a Modified Diminishing Increment Sorting
    Scientific Research and Essay Vol. 4 (8), pp. 740-744, August, 2009. Available online at http://www.academicjournals.org/SRE. ISSN 1992-2248 © 2009 Academic Journals. Full Length Research Paper.
    Improving the performance of bubble sort using a modified diminishing increment sorting. Oyelami Olufemi Moses, Department of Computer and Information Sciences, Covenant University, P. M. B. 1023, Ota, Ogun State, Nigeria. E-mail: [email protected] or [email protected]. Tel.: +234-8055344658. Accepted 17 February, 2009.
    Sorting involves rearranging information into either ascending or descending order. There are many sorting algorithms, among which is Bubble Sort. Bubble Sort is not known to be a very good sorting algorithm because it is beset with redundant comparisons. However, efforts have been made to improve the performance of the algorithm. With Bidirectional Bubble Sort, the average number of comparisons is slightly reduced, and Batcher's Sort, similar to Shellsort, also performs significantly better than Bidirectional Bubble Sort by carrying out comparisons in a novel way so that no propagation of exchanges is necessary. Bitonic Sort was also presented by Batcher, and the strong point of this sorting procedure is that it is very suitable for a hard-wired implementation using a sorting network. This paper presents a meta-algorithm called Oyelami's Sort that combines the technique of Bidirectional Bubble Sort with a modified diminishing increment sorting. The results from the implementation of the algorithm, compared with Batcher's Odd-Even Sort and Batcher's Bitonic Sort, showed that the algorithm performed better than the two in the worst case scenario. The implication is that the algorithm is faster.
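    For reference, a minimal Java sketch of the Bidirectional Bubble Sort (cocktail sort) that the abstract builds on; this is the standard formulation, not the paper's Oyelami's Sort:

    // Alternates forward and backward bubbling passes over a shrinking window,
    // stopping as soon as a full cycle makes no exchanges.
    static void bidirectionalBubbleSort(int[] a) {
        int lo = 0, hi = a.length - 1;
        boolean swapped = true;
        while (swapped && lo < hi) {
            swapped = false;
            for (int i = lo; i < hi; i++)        // forward pass: bubble max to a[hi]
                if (a[i] > a[i + 1]) { int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t; swapped = true; }
            hi--;
            for (int i = hi; i > lo; i--)        // backward pass: bubble min to a[lo]
                if (a[i - 1] > a[i]) { int t = a[i - 1]; a[i - 1] = a[i]; a[i] = t; swapped = true; }
            lo++;
        }
    }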
  • Overview Parallel Merge Sort
    CME 323: Distributed Algorithms and Optimization, Spring 2015. http://stanford.edu/~rezab/dao. Instructor: Reza Zadeh, Matroid and Stanford. Lecture 4, 4/6/2016. Scribed by Henry Neeb, Christopher Kurrus, Andreas Santucci.
    Overview. Today we will continue covering divide and conquer algorithms. We will generalize divide and conquer algorithms and write down a general recipe for them. What's nice about these algorithms is that they are timeless; regardless of whether Spark or any other distributed platform ends up winning out in the next decade, these algorithms always provide a theoretical foundation on which we can build. It's well worth our understanding.
    • Parallel merge sort
    • General recipe for divide and conquer algorithms
    • Parallel selection
    • Parallel quick sort (introduction only)
    Parallel selection involves scanning an array for the kth largest element in linear time. We then take the core idea used in that algorithm and apply it to quick-sort.
    Parallel Merge Sort. Recall the merge sort from the prior lecture. This algorithm sorts a list recursively by dividing the list into smaller pieces, sorting the smaller pieces during reassembly of the list. The algorithm is as follows:

    Algorithm 1: MergeSort(A)
    Input:  Array A of length n
    Output: Sorted A
    1  if n is 1 then
    2      return A
    3  end
    4  else
    5      L <- mergeSort(A[0, ..., n/2))
    6      R <- mergeSort(A[n/2, ..., n])
    7      return Merge(L, R)
    8  end

    Last lecture, we described one way where we can take our traditional merge operation and translate it into a parallelMerge routine with work O(n log n) and depth O(log n).
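    A shared-memory illustration of Algorithm 1 using Java's fork/join framework; this sketch only shows the divide-and-conquer structure (the merge here is sequential, so it does not achieve the parallel-merge depth discussed in the lecture):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;

    // Sorts a[lo..hi): fork both halves, join, then merge them sequentially.
    class ParallelMergeSort extends RecursiveAction {
        private final int[] a, aux;
        private final int lo, hi;
        ParallelMergeSort(int[] a, int[] aux, int lo, int hi) {
            this.a = a; this.aux = aux; this.lo = lo; this.hi = hi;
        }
        protected void compute() {
            if (hi - lo <= 1) return;                        // base case: length 0 or 1
            int mid = lo + (hi - lo) / 2;
            invokeAll(new ParallelMergeSort(a, aux, lo, mid),
                      new ParallelMergeSort(a, aux, mid, hi));
            merge(mid);
        }
        private void merge(int mid) {                        // sequential two-finger merge
            System.arraycopy(a, lo, aux, lo, hi - lo);
            int i = lo, j = mid;
            for (int k = lo; k < hi; k++)
                if      (i >= mid)          a[k] = aux[j++];
                else if (j >= hi)           a[k] = aux[i++];
                else if (aux[j] < aux[i])   a[k] = aux[j++];
                else                        a[k] = aux[i++];
        }
        static void sort(int[] a) {
            ForkJoinPool.commonPool().invoke(new ParallelMergeSort(a, new int[a.length], 0, a.length));
        }
    }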
  • Algoritmi za sortiranje u programskom jeziku C++ (Sorting Algorithms in the C++ Programming Language), bachelor's thesis
    UNIVERSITY OF RIJEKA, FACULTY OF HUMANITIES AND SOCIAL SCIENCES IN RIJEKA, DEPARTMENT OF POLYTECHNICS. Sorting algorithms in the C++ programming language — bachelor's thesis (Algoritmi za sortiranje u programskom jeziku C++ — završni rad). Thesis supervisor: doc. dr. sc. Marko Maliković. Student: Alen Jakus. Rijeka, 2016.
    UNIVERSITY OF RIJEKA, Faculty of Humanities and Social Sciences, Department of Polytechnics, Rijeka, Sveučilišna avenija 4. Committee for final and graduation examinations. Rijeka, 7 April 2016. BACHELOR'S THESIS ASSIGNMENT (university undergraduate study of polytechnics). Candidate: Alen Jakus. Assignment: Sorting algorithms in the C++ programming language. The solution of the assignment must cover the following: 1. Give an overview of sorting algorithms. 2. Describe the selected sorting algorithms. 3. Illustrate the operation of the selected sorting algorithms with diagrams. 4. Describe the basic features of the C++ programming language. 5. Give a detailed description of the data types, derived data forms, statements and other elements of the C++ programming language used in the solutions of the selected problems. 6. Describe the solutions obtained from the written programs. 7. Provide the complete code in the C++ programming language. The thesis must comply with the Regulations on the graduation thesis and the Instructions for preparing a final thesis of the university undergraduate study. Assignment handed to the candidate: 7 April 2016. Deadline for submission of the thesis: ____________________. Date of submission of the thesis: ____________________. Assignment set by: Doc. dr. sc. Marko Maliković.
    FACULTY OF HUMANITIES AND SOCIAL SCIENCES IN RIJEKA, Department of Polytechnics. Rijeka, 7 April 2016. ASSIGNMENT FOR THE BACHELOR'S THESIS (university undergraduate study of polytechnics). Candidate: Alen Jakus. Title of the bachelor's thesis: Sorting algorithms in the C++ programming language. Brief description of the assignment: Give an overview of sorting algorithms. Describe the selected sorting algorithms.
  • Lecture 16: Lower Bounds for Sorting
    Lecture Notes, CMSC 251. To bound this, recall the integration formula for bounding summations (which we paraphrase here). For any monotonically increasing function f(x),
        ∑_{i=a}^{b−1} f(i) ≤ ∫_a^b f(x) dx.
    The function f(x) = x ln x is monotonically increasing, and so we have
        S(n) ≤ ∫_2^n x ln x dx.
    If you are a calculus macho man, then you can integrate this by parts, and if you are a calculus wimp (like me) then you can look it up in a book of integrals:
        ∫_2^n x ln x dx = [(x²/2) ln x − x²/4] evaluated from x = 2 to n
                        = (n²/2) ln n − n²/4 − (2 ln 2 − 1)
                        ≤ (n²/2) ln n − n²/4.
    This completes the summation bound, and hence the entire proof.
    Summary: So even though the worst-case running time of QuickSort is Θ(n²), the average-case running time is Θ(n log n). Although we did not show it, it turns out that this doesn't just happen much of the time. For large values of n, the running time is Θ(n log n) with high probability. In order to get Θ(n²) time the algorithm must make poor choices for the pivot at virtually every step. Poor choices are rare, and so continuously making poor choices is very rare. You might ask, could we make QuickSort deterministic Θ(n log n) by calling the selection algorithm to use the median as the pivot? The answer is that this would work, but the resulting algorithm would be so slow practically that no one would ever use it.
  • Quick Sort Algorithm
    Quick Sort Algorithm. Song Qin, Dept. of Computer Sciences, Florida Institute of Technology, Melbourne, FL 32901.
    ABSTRACT. Given an array with n elements, we want to rearrange them in ascending order. In this paper, we introduce Quick Sort, a divide-and-conquer algorithm to sort an N element array. We evaluate the O(N log N) time complexity in the best case and O(N²) in the worst case theoretically. We also introduce a way to approach the best case.
    1. INTRODUCTION. A search engine relies on sorting algorithms very much. When you search some keyword online, the feedback information is brought to you sorted by the importance of the web page. Bubble, Selection and Insertion Sort all have an O(N²) time complexity that limits their usefulness to a small number of elements, no more than a few thousand data points. In Bubble sort, Selection sort and Insertion sort, the O(N²) time complexity limits the performance when N gets very big.
    ... each iteration. Repeat this on the rest of the unsorted region without the first element. Bubble sort works as follows: keep passing through the list, exchanging adjacent elements if the list is out of order; when no exchanges are required on some pass, the list is sorted.
    Merge sort [4] has an O(N log N) time complexity. It divides the array into two subarrays each with N/2 items. Conquer each subarray by sorting it. Unless the array is sufficiently small (one element left), use recursion to do this. Combine the solutions to the subarrays by merging them into a single sorted array.
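    The bubble-sort description quoted above translates directly into code; a minimal Java sketch:

    // Keep passing through the array, exchanging adjacent out-of-order elements;
    // stop when a pass requires no exchanges.
    static void bubbleSort(int[] a) {
        boolean exchanged = true;
        while (exchanged) {
            exchanged = false;
            for (int i = 0; i + 1 < a.length; i++) {
                if (a[i] > a[i + 1]) {
                    int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                    exchanged = true;
                }
            }
        }
    }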
  • Heapsort Vs. Quicksort
    Heapsort vs. Quicksort. Most groups had sound data and observed:
    – Random problem instances: Heapsort runs perhaps 2x slower on small instances; it's even slower on larger instances.
    – Nearly-sorted instances: Quicksort is worse than Heapsort on large instances.
    Some groups counted comparisons: Heapsort uses more comparisons on random data.
    Most groups concluded: experiments show that the MH2 predictions are correct, at least for random data.

    Sorting Random Data
      N          Time (µs)
                 Quicksort    Heapsort
      10                19          21
      100              173         293
      1,000          2,238       5,289
      10,000        28,736      78,064
      100,000      355,949   1,184,493
    "HeapSort is definitely growing faster (in running time) than is QuickSort. ... This lends support to the MH2 model." Does it? What other explanations are there?

    Sorting Random Data
      N          Number of comparisons
                 Quicksort    Heapsort
      10                54          56
      100              987       1,206
      1,000         13,116      18,708
      10,000       166,926     249,856
      100,000    2,050,479   3,136,104
    But wait – the number of comparisons for Heapsort is also going up faster than for Quicksort. This has nothing to do with the MH2 analysis. How can we see if the MH2 analysis is relevant?

    Sorting Random Data
      N          Time (µs)               Compares                Time / compare (ns)
                 Quicksort   Heapsort    Quicksort    Heapsort   Quicksort   Heapsort
      10                19         21           54          56         352        375
      100              173        293          987       1,206         175        243
      1,000          2,238      5,289       13,116      18,708         171        283
      10,000        28,736     78,064      166,926     249,856         172        312
      100,000      355,949  1,184,493    2,050,479   3,136,104         174        378
    Nice data!
    – Why does N = 10 take so much longer per comparison?
    – Why does Heapsort always take longer than Quicksort?
    – Is Heapsort growth as predicted by the MH2 model? Is N large enough to be interesting? (Machine is a Sun Ultra 10)
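    Counting comparisons, as some groups did, is straightforward to instrument in Java by wrapping the comparator; a sketch under the assumption that the sort being measured accepts a java.util.Comparator:

    import java.util.Comparator;

    // Wraps any Comparator and tallies how many times compare() is called.
    class CountingComparator<T> implements Comparator<T> {
        long count = 0;
        private final Comparator<T> cmp;
        CountingComparator(Comparator<T> cmp) { this.cmp = cmp; }
        public int compare(T x, T y) { count++; return cmp.compare(x, y); }
    }

    // Usage sketch:
    //   CountingComparator<Integer> cc = new CountingComparator<>(Comparator.naturalOrder());
    //   java.util.Arrays.sort(data, cc);      // or the quicksort/heapsort under test
    //   System.out.println(cc.count + " comparisons");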
  • Selected Sorting Algorithms
    Selected Sorting Algorithms. CS 165: Project in Algorithms and Data Structures. Michael T. Goodrich. Some slides are from J. Miller, CSE 373, U. Washington.
    Why Sorting?
    • Practical application – People by last name – Countries by population – Search engine results by relevance
    • Fundamental to other algorithms
    • Different algorithms have different asymptotic and constant-factor trade-offs – No single 'best' sort for all scenarios – Knowing one way to sort just isn't enough
    • Many approaches to sorting can be used for other problems
    Problem statement. There are n comparable elements in an array and we want to rearrange them to be in increasing order.
    Pre: – An array A of data records – A value in each data record – A comparison function (<, =, >, compareTo)
    Post: – For each distinct position i and j of A, if i < j then A[i] ≤ A[j] – A has all the same data it started with
    Insertion sort
    • insertion sort: orders a list of values by repetitively inserting a particular value into a sorted subset of the list
    • more specifically: – consider the first item to be a sorted sublist of length 1 – insert the second item into the sorted sublist, shifting the first item if needed – insert the third item into the sorted sublist, shifting the other items as needed – repeat until all values have been inserted into their proper positions
    • Simple sorting algorithm. – n−1 passes over the array – At the end of pass i, the elements that occupied A[0]…A[i] originally are still in those spots and in sorted order.
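    The insertion-sort description above corresponds to the following minimal Java sketch:

    // Grow a sorted prefix a[0..i-1]; insert a[i] by shifting larger entries right.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int v = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > v) {     // shift entries greater than v
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = v;                    // v is now in its proper position
        }
    }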
  • Advanced Topics in Sorting
    Advanced Topics in Sorting: complexity, system sorts, duplicate keys, comparators.
    Complexity of sorting. Computational complexity: framework to study efficiency of algorithms for solving a particular problem X.
    Machine model. Focus on fundamental operations.
    Upper bound. Cost guarantee provided by some algorithm for X.
    Lower bound. Proven limit on cost guarantee of any algorithm for X.
    Optimal algorithm. Algorithm with best cost guarantee for X. (lower bound ~ upper bound)
    Example: sorting.
    • Machine model = # comparisons (access information only through compares)
    • Upper bound = N lg N from mergesort.
    • Lower bound?
    [Figure: decision tree for sorting three elements a, b, c — internal nodes are comparisons such as a < b with yes/no branches, leaves are the six orderings; code between comparisons (e.g., sequence of exchanges).]
    Comparison-based lower bound for sorting.
    Theorem. Any comparison-based sorting algorithm must use more than N lg N − 1.44 N comparisons in the worst case.
    Pf. Assume the input consists of N distinct values a_1 through a_N.
    • Worst case dictated by tree height h.
    • N! different orderings.
    • (At least) one leaf corresponds to each ordering.
    • A binary tree with N! leaves cannot have height less than lg (N!).
        h ≥ lg N! ≥ lg (N/e)^N          (Stirling's formula)
          = N lg N − N lg e
          ≥ N lg N − 1.44 N
    Complexity of sorting. Upper bound: cost guarantee provided by some algorithm for X. Lower bound: proven limit on cost guarantee of any algorithm for X.