Introduction to Algorithms

Sorting in Linear Time
CSE 680, Prof. Roger Crawfis

Comparison Sorting Review

Insertion sort:
- Pros:
  - Easy to code
  - Fast on small inputs (fewer than ~50 elements)
  - Fast on nearly-sorted inputs
- Cons:
  - O(n²) worst case
  - O(n²) average case
  - O(n²) on reverse-sorted input

Merge sort:
- Divide-and-conquer:
  - Split the array in half
  - Recursively sort the sub-arrays
  - Linear-time merge step
- Pros:
  - O(n lg n) worst case - asymptotically optimal for comparison sorts
- Cons:
  - Doesn't sort in place

Heap sort:
- Uses the very useful heap data structure
  - Complete binary tree
  - Heap property: parent key > children's keys
- Pros:
  - O(n lg n) worst case - asymptotically optimal for comparison sorts
  - Sorts in place
- Cons:
  - Fair amount of shuffling memory around

Quick sort:
- Divide-and-conquer:
  - Partition the array into two sub-arrays, recursively sort each
  - All of the first sub-array < all of the second sub-array
- Pros:
  - O(n lg n) average case
  - Sorts in place
  - Fast in practice (why?)
- Cons:
  - O(n²) worst case
  - Naïve implementations hit the worst case on sorted input
  - Good partitioning makes this very unlikely

Non-Comparison Based Sorting

- Many times we have restrictions on our keys:
  - Deck of cards: Ace through King, in four suits
  - Social Security numbers
  - Employee IDs
- We will examine three algorithms which, under certain conditions, can run in O(n) time:
  - Counting sort
  - Radix sort
  - Bucket sort

Counting Sort

- Depends on an assumption about the numbers being sorted:
  - Assume the numbers are in the range 1..k
- The algorithm:
  - Input: A[1..n], where A[j] ∈ {1, 2, 3, …, k}
  - Output: B[1..n], sorted (not sorted in place)
  - Also: array C[1..k] for auxiliary storage

  1  CountingSort(A, B, k)
  2    for i = 1 to k
  3      C[i] = 0
  4    for j = 1 to n            // C is now a histogram of A
  5      C[A[j]] += 1
  6    for i = 2 to k
  7      C[i] = C[i] + C[i-1]
  8    for j = n downto 1
  9      B[C[A[j]]] = A[j]
  10     C[A[j]] -= 1

- What is the running time?
  - The loops over C (lines 2-3 and 6-7) take O(k) time.
  - The loops over A (lines 4-5 and 8-10) take O(n) time.
  - Total time: O(n + k); works well if k = O(n) or k = O(1).
- This algorithm/implementation is stable:
  - A sorting algorithm is stable when numbers with the same value appear in the output array in the same order as they do in the input array.
- Why don't we always use counting sort?
  - Because it depends on the range k of the elements.
  - Could we use counting sort to sort 32-bit integers? Why or why not?

Counting Sort Review

- Assumption: input is taken from a small set of numbers of size k
- Basic idea:
  - For each element, count the number of elements less than it.
  - That count gives the position of the element - similar to selection sort.
- Pros:
  - Fast: asymptotically fast, O(n + k)
  - Simple to code
- Cons:
  - Doesn't sort in place
  - Elements must be integers (countable)
  - Requires O(n + k) extra storage
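The pseudocode above translates nearly line for line into runnable code. Below is a minimal Python sketch (the function name and test input are illustrative, not from the course; output indexing is 0-based where the slides use 1-based arrays). Note how the final right-to-left pass is what makes the sort stable:

    def counting_sort(A, k):
        """Stable counting sort of a list A with values in 1..k; returns a new list."""
        n = len(A)
        C = [0] * (k + 1)               # histogram: C[v] = occurrences of value v
        for v in A:
            C[v] += 1
        for i in range(2, k + 1):       # prefix sums: C[v] = count of elements <= v
            C[i] += C[i - 1]
        B = [0] * n
        for j in range(n - 1, -1, -1):  # right-to-left keeps equal keys in input order
            B[C[A[j]] - 1] = A[j]       # the -1 converts a 1-based position to 0-based
            C[A[j]] -= 1
        return B

    print(counting_sort([3, 1, 4, 1, 5, 2, 3], k=5))   # [1, 1, 2, 3, 3, 4, 5]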
Radix Sort

- How did IBM get rich originally?
- Answer: punched-card readers for census tabulation in the early 1900s.
  - In particular, a card sorter that could sort cards into different bins.
  - Each column can be punched in 12 places; decimal digits use 10 of them.
  - Problem: only one column can be sorted on at a time.
- Intuitively, you might sort on the most significant digit, then the second most significant digit, etc.
  - Problem: lots of intermediate piles of cards (read: scratch arrays) to keep track of.
- Key idea: sort on the least significant digit first.

  RadixSort(A, d)
    for i = 1 to d
      StableSort(A) on digit i

Radix Sort Correctness

- Sketch of an inductive proof of correctness (induction on the number of passes):
  - Assume the lower-order digits {j : j < i} are sorted.
  - Show that sorting on the next digit i leaves the array correctly sorted:
    - If two digits at position i differ, ordering the numbers by that digit is correct (the lower-order digits are irrelevant).
    - If they are the same, the numbers are already sorted on the lower-order digits. Since we use a stable sort, the numbers stay in the right order.

Radix Sort

- What sort is used to sort on digits?
- Counting sort is the obvious choice:
  - Sort n numbers on digits that range over 1..k
  - Time: O(n + k)
- Each pass over n numbers with d digits takes time O(n + k), so the total time is O(dn + dk).
- When d is constant and k = O(n), radix sort takes O(n) time.

Radix Sort Review

- Assumption: input has d digits, each ranging from 0 to k
- Basic idea:
  - Sort elements by digit, starting with the least significant.
  - Use a stable sort (like counting sort) for each stage.
- Pros:
  - Fast: asymptotically fast (i.e., O(n) when d is constant and k = O(n))
  - Simple to code
  - A good choice
- Cons:
  - Doesn't sort in place
  - Not a good choice for floating-point numbers or arbitrary strings
- Example problem: sort 1 million 64-bit numbers.
  - Treat them as four-digit radix-2^16 numbers.
  - Radix sort can sort them in just four passes!
  - This performs well compared to a typical O(n lg n) comparison sort, which needs approximately lg(1,000,000) ≅ 20 comparisons per number being sorted.
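As a concrete sketch (our own illustrative code, not the course's), here is an LSD radix sort in Python. It uses per-digit bucket grouping, which distributes keys stably just as the counting sort pass recommended in the slides would; each loop iteration plays the role of one "StableSort(A) on digit i":

    def radix_sort(A, d, base=10):
        """LSD radix sort: d passes of a stable grouping, least significant digit first."""
        for i in range(d):                        # digit 0 = least significant
            buckets = [[] for _ in range(base)]
            for x in A:
                digit = (x // base**i) % base     # extract digit i of x
                buckets[digit].append(x)          # appending preserves order -> stable
            A = [x for b in buckets for x in b]   # concatenate buckets in digit order
        return A

    print(radix_sort([329, 457, 657, 839, 436, 720, 355], d=3))
    # [329, 355, 436, 457, 657, 720, 839]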
Bucket Sort

- Assumption: input elements are distributed uniformly over some known range, e.g., [0, 1), so every element of A is greater than or equal to 0 and less than 1. (Appendix C.2 has the definition of a uniform distribution.)

  Bucket-Sort(A, x, y)
  1. divide the interval [x, y) into n equal-sized subintervals (buckets)
  2. distribute the n input keys into the buckets
  3. sort the numbers in each bucket (e.g., with insertion sort)
  4. scan the (sorted) buckets in order and produce the output array

- For the range [0, 1):

  Bucket-Sort(A)
  1. n = length[A]
  2. for i = 1 to n
  3.   insert A[i] into list B[floor(n * A[i])]
  4. for i = 0 to n-1
  5.   sort list B[i] with Insertion-Sort
  6. concatenate lists B[0], B[1], …, B[n-1]

- Running time of bucket sort: O(n) expected time
  - Step 1: O(1) per interval, O(n) time total.
  - Step 2: O(n) time.
  - Step 3: the expected number of elements in each bucket is O(1) (see the book, Section 8.4, for the formal argument), so the total is O(n).
  - Step 4: O(n) time to scan the n buckets containing a total of n input elements.

Bucket Sort Review

- Assumption: input is uniformly distributed across a range
- Basic idea:
  - Partition the range into a fixed number of buckets.
  - Toss each element into its appropriate bucket.
  - Sort each bucket.
- Pros:
  - Fast: asymptotically fast (i.e., O(n) when the distribution is uniform)
  - Simple to code
  - Good for a rough sort
- Cons:
  - Doesn't sort in place
- (A runnable sketch follows the summary below.)

Summary of Linear Sorting

Non-Comparison Based Sorts

                  Running Time
                  worst-case     average-case   best-case      in place
  Counting Sort   O(n + k)       O(n + k)       O(n + k)       no
  Radix Sort      O(d(n + k'))   O(d(n + k'))   O(d(n + k'))   no
  Bucket Sort     -              O(n)           -              no

Counting sort assumes the input elements are in the range [0, 1, 2, …, k] and uses array indexing to count the number of occurrences of each value. Radix sort assumes each integer consists of d digits, each in the range [1, 2, …, k']. Bucket sort requires advance knowledge of the input distribution (it sorts n numbers uniformly distributed in a range in O(n) expected time).
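As promised in the review above, a minimal bucket sort sketch in Python for keys uniform in [0, 1) (illustrative code, not the course's; Python's built-in sorted stands in for the per-bucket insertion sort):

    import math

    def bucket_sort(A):
        """Bucket sort for inputs uniformly distributed in [0, 1)."""
        n = len(A)
        B = [[] for _ in range(n)]           # one bucket per input element
        for x in A:
            B[math.floor(n * x)].append(x)   # bucket index = floor(n * A[i])
        out = []
        for bucket in B:
            out.extend(sorted(bucket))       # expected O(1) elements per bucket
        return out

    print(bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]))
    # [0.12, 0.17, 0.21, 0.23, 0.26, 0.39, 0.68, 0.72, 0.78, 0.94]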