Sorting: From Theory to Practice

- Why do we study sorting?
  • Because we have to
  • Because sorting is beautiful
  • Example of algorithm analysis in a simple, useful setting
- There are n sorting algorithms; how many should we study?
  • O(n)? O(log n)? …
  • Why do we study more than one algorithm?
    - Some are good, some are bad, some are very, very sad
    - Paradigms of trade-offs and algorithmic design
  • Which is best?
  • Which sort should you call from code you write? (see the sketch below)

Sorting out sorts

- Simple, O(n^2) sorts, for sorting n elements
  • Selection sort: n^2 comparisons, n swaps, easy to code
  • Insertion sort: n^2 comparisons, n^2 moves, stable, fast
  • Bubble sort: n^2 everything; slow, slower, and ugly
- Divide-and-conquer sorts: O(n log n) for n elements
  • Quicksort: fast in practice, O(n^2) worst case
  • Merge sort: good worst case, great for linked lists, uses extra storage for vectors/arrays
- Other sorts:
  • Heap sort: basically priority-queue sorting
  • Radix sort: doesn't compare keys, uses digits/characters
  • Shell sort: quasi-insertion, fast in practice, non-recursive
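One concrete answer to "which sort should you call": in Java, lean on the library. A minimal sketch with arbitrary values; note that current JDKs use a dual-pivot quicksort for primitive arrays and a stable merge sort (TimSort) for object arrays:

  import java.util.Arrays;

  public class LibrarySortDemo {
      public static void main(String[] args) {
          int[] nums = {23, 34, 56, 25, 44};          // primitives: fast, not stable
          Arrays.sort(nums);
          System.out.println(Arrays.toString(nums));  // [23, 25, 34, 44, 56]

          String[] names = {"carol", "alice", "bob"}; // objects: stable sort
          Arrays.sort(names);
          System.out.println(Arrays.toString(names)); // [alice, bob, carol]
      }
  }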


Selection sort: summary

- Simple to code n^2 sort: n^2 comparisons, n swaps

  void selectSort(String[] a)
  {
      for(int k=0; k < a.length; k++){
          int minIndex = findMin(a,k);   // index of smallest in a[k..]
          swap(a,k,minIndex);
      }
  }

- # comparisons: sum of k for k = 1..n = 1 + 2 + … + n = n(n+1)/2 = O(n^2)
- Swaps?
- Invariant: ????? [diagram: a[0..k-1] sorted, won't move, in final position]

Insertion Sort: summary

- Stable sort, O(n^2), good on nearly sorted vectors
  • Stable sorts maintain order of equal keys
  • Good for sorting on two criteria: name, then age (see the sketch after this slide)

  void insertSort(String[] a)
  {
      int k, loc;
      String elt;
      for(k=1; k < a.length; k++) {
          elt = a[k];
          loc = k;
          // shift until spot for elt is found
          while (0 < loc && elt.compareTo(a[loc-1]) < 0) {
              a[loc] = a[loc-1];   // shift right
              loc = loc-1;
          }
          a[loc] = elt;
      }
  }

- Invariant: ????? [diagram: a[0..k-1] sorted relative to each other]
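A hedged sketch of the two-criteria idea (the Person class here is invented for illustration): sort by the secondary key first, then stable-sort by the primary key; because the second sort is stable, equal names stay ordered by age. Java's Arrays.sort on objects is stable, so it stands in for insertion sort:

  import java.util.Arrays;
  import java.util.Comparator;

  class Person {
      String name; int age;
      Person(String name, int age) { this.name = name; this.age = age; }
      public String toString() { return name + ":" + age; }
  }

  public class TwoKeySortDemo {
      public static void main(String[] args) {
          Person[] people = {
              new Person("bob", 30), new Person("alice", 40),
              new Person("bob", 20), new Person("alice", 25)
          };
          Arrays.sort(people, Comparator.comparingInt((Person p) -> p.age)); // secondary key
          Arrays.sort(people, Comparator.comparing((Person p) -> p.name));   // primary key, stable
          System.out.println(Arrays.toString(people));
          // [alice:25, alice:40, bob:20, bob:30]
      }
  }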

Bubble sort: summary of a dog

- For completeness you should know about this sort
  • Few, if any, redeeming features. Really slow, really, really
  • Can code to recognize an already sorted vector (see insertion)
    - Not worth it for bubble sort, much slower than insertion

  void bubbleSort(String[] a)
  {
      for(int j=a.length-1; j >= 0; j--) {
          for(int k=0; k < j; k++) {
              if (a[k].compareTo(a[k+1]) > 0)
                  swap(a,k,k+1);
          }
      }
  }

- "bubble" elements down the vector/array
- ????? [diagram: after each pass, a[j..] sorted, in final position]

Summary of simple sorts

- Selection sort has n swaps, good for "heavy" data
  • moving objects with lots of state, e.g., …
    - In C or C++ this is an issue
    - In Java everything is a pointer/reference, so swapping is fast since it's pointer assignment
- Insertion sort is good on nearly sorted data, it's stable, it's fast
  • Also the foundation for Shell sort, a very fast non-recursive sort (sketched below)
    - More complicated to code, but relatively simple, and fast
- Bubble sort is a travesty? But it's fast to code if you know it!
  • Can be parallelized, but on one machine don't go near it (see quotes at end of slides)
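Since Shell sort keeps coming up as insertion sort's fast, non-recursive relative, here is a minimal sketch using the simple gap sequence n/2, n/4, …, 1 (better gap sequences exist; this is just the idea):

  void shellSort(String[] a) {
      for (int gap = a.length/2; gap > 0; gap /= 2) {
          // insertion sort, but comparing elements gap apart
          for (int k = gap; k < a.length; k++) {
              String elt = a[k];
              int loc = k;
              while (loc >= gap && elt.compareTo(a[loc-gap]) < 0) {
                  a[loc] = a[loc-gap];   // shift right by gap
                  loc -= gap;
              }
              a[loc] = elt;
          }
      }
  }

The final pass (gap = 1) is plain insertion sort, but by then the array is nearly sorted, which is exactly where insertion sort is fast.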


Quicksort: fast in practice

- Invented in 1962 by C.A.R. Hoare, who didn't understand recursion at the time
- Worst case is O(n^2), but avoidable in nearly all cases
- In 1997 introsort was published (Musser, introspective sort)
  • Like quicksort in practice, but recognizes when it will be bad and changes to heapsort

  void quick(String[] a, int left, int right)
  {
      if (left < right) {
          int pivot = partition(a,left,right);
          quick(a,left,pivot-1);
          quick(a,pivot+1,right);
      }
  }

- Recurrence?

Partition code for quicksort

- Easy to develop partition
- What we want: a[left..pIndex] <= pivot, a[pIndex+1..right] > pivot [diagram]
- What we have: ?????????????? [diagram: a[left..right] unexamined]

  int partition(String[] a, int left, int right)
  {
      String pivot = a[left];
      int k, pIndex = left;
      for(k=left+1; k <= right; k++) {
          if (a[k].compareTo(pivot) <= 0){
              pIndex++;
              swap(a,k,pIndex);
          }
      }
      swap(a,left,pIndex);
      return pIndex;   // final pivot index
  }

- Loop invariant [diagram: pivot X in a[left]; a[left+1..pIndex] <= X; a[pIndex+1..k-1] > X]
  • statement true each time the loop test is evaluated, used to verify correctness of the loop
- Can swap into a[left] before the loop (see the pivot-selection sketch below)
- Nearly sorted data still ok
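The usual way to "swap into a[left] before the loop" is median-of-three pivot selection: order the leftmost, middle, and rightmost elements, then move the median into a[left] where partition expects the pivot. A hedged sketch (swap is the same helper the slides' code assumes):

  void medianOfThree(String[] a, int left, int right) {
      int mid = (left + right) / 2;
      if (a[mid].compareTo(a[left]) < 0)   swap(a, mid, left);
      if (a[right].compareTo(a[left]) < 0) swap(a, right, left);
      if (a[right].compareTo(a[mid]) < 0)  swap(a, right, mid);
      // now a[left] <= a[mid] <= a[right]; put the median where partition looks
      swap(a, left, mid);
  }

This makes already-sorted and reverse-sorted inputs behave like average-case inputs instead of triggering the O(n^2) worst case.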

Analysis of Quicksort

- Average case and worst case analysis
  • Recurrence for worst case: T(n) = T(n-1) + T(1) + O(n)
  • What about average? T(n) = 2T(n/2) + O(n)
- Reason informally:
  • Two calls, each on a vector of size n/2
  • Four calls, each on a vector of size n/4
  • … How many calls? Work done on each call?
- Partition: typically find the median of left, middle, right; swap it into place; go
  • Avoid bad performance on nearly sorted data
- In practice: remove some (all?) recursion, avoid lots of "clones" (see the sketch below)

Tail recursion elimination

- If the last statement is a recursive call, recursion can be replaced with iteration
  • Call cannot be part of an expression
  • Some compilers do this automatically

  void foo(int n)
  {
      if (0 < n) {
          System.out.println(n);
          foo(n-1);   // tail call: nothing happens after it
      }
  }

  void foo2(int n)
  {
      while (0 < n) {
          System.out.println(n);
          n = n-1;    // the tail call becomes a loop update
      }
  }

- What if print and recursive call switched?
- What about recursive factorial? return n*factorial(n-1);
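A hedged sketch of what "remove some recursion" can look like in quicksort: recurse only on the smaller half and loop on the larger one. This eliminates the tail call by hand and also bounds the stack depth at O(log n), even on bad inputs:

  void quick(String[] a, int left, int right) {
      while (left < right) {
          int pivot = partition(a, left, right);
          if (pivot - left < right - pivot) {
              quick(a, left, pivot - 1);   // smaller half: recurse
              left = pivot + 1;            // larger half: iterate
          } else {
              quick(a, pivot + 1, right);
              right = pivot - 1;
          }
      }
  }

(Recursive factorial as written is not tail recursive: the multiply happens after the call returns. Rewriting it with an accumulator parameter makes the recursive call the last action.)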


Merge sort: worst case O(n log n)

- Divide and conquer: a recursive sort
  • Divide the list/vector into two halves
    - Sort each half
    - Merge the sorted halves together
- What is the complexity of merging two sorted lists?
- What is the recurrence relation for merge sort as described?
  T(n) = 2T(n/2) + O(n)  (expanded below)
- What is the advantage of an array over a linked list for merge sort?
  • What about merging: advantage of a linked list?
- Array requires auxiliary storage (or very fancy coding)

Merge sort: lists or vectors

- Mergesort for vectors:

  void mergesort(String[] a, int left, int right)
  {
      if (left < right) {
          int mid = (right+left)/2;
          mergesort(a, left, mid);
          mergesort(a, mid+1, right);
          merge(a,left,mid,right);
      }
  }

- What's different when linked lists are used?
  • Do the differences affect complexity? Why?
- How does merge work?
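Filling in the recurrence: merging two sorted halves of total length n is O(n), so T(n) = 2T(n/2) + cn for some constant c. A standard expansion sketch, assuming n is a power of 2:

  T(n) = 2T(n/2) + cn
       = 4T(n/4) + cn + cn
       = 8T(n/8) + cn + cn + cn
       ...
       = 2^k T(n/2^k) + k*cn
       = n*T(1) + cn*log2(n)    when 2^k = n, i.e., k = log2(n)
       = O(n log n)

Each of the log2(n) levels of calls does cn total work, which is where the n log n comes from.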

Mergesort continued

- Array code for merge isn't pretty, but it's not hard (sketched below)
- Mergesort itself is elegant

  void merge(String[] a,
             int left, int middle, int right)
  // pre:  left <= middle <= right,
  //       a[left] <= … <= a[middle],
  //       a[middle+1] <= … <= a[right]
  // post: a[left] <= … <= a[right]

- Why is this prototype potentially simpler for linked lists?
- What will the prototype be? What is the complexity?

Summary of O(n log n) sorts

- Quicksort is relatively straightforward to code, very fast
  • Worst case is very unlikely, but possible, therefore …
  • But if lots of elements are equal, performance will be bad
    - One million integers from the range 0 to 10,000
    - How can we change partition to handle this?
- Merge sort is stable, it's fast, good for linked lists, harder to code?
  • Worst case performance is O(n log n), compare quicksort
  • Extra storage for array/vector
- Heapsort: more complex to code, good worst case, not stable
  • Basically a heap-based priority queue in a vector
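One hedged way to fill in that prototype, using an auxiliary array (the "extra storage" the summary mentions):

  void merge(String[] a, int left, int middle, int right)
  {
      // pre:  a[left..middle] and a[middle+1..right] are each sorted
      // post: a[left..right] is sorted
      String[] aux = new String[right - left + 1];
      int i = left, j = middle + 1, k = 0;
      while (i <= middle && j <= right) {
          // <= takes equal keys from the left half first:
          // this is exactly what makes merge sort stable
          if (a[i].compareTo(a[j]) <= 0) aux[k++] = a[i++];
          else                           aux[k++] = a[j++];
      }
      while (i <= middle) aux[k++] = a[i++];
      while (j <= right)  aux[k++] = a[j++];
      for (k = 0; k < aux.length; k++) a[left+k] = aux[k];
  }

Each element is copied out and back exactly once, so merging is O(n) for n = right - left + 1.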


Sorting in practice

- Rarely will you need to roll your own sort, but when you do …
  • What are the key issues?
- If you use a library sort, you need to understand the interface
  • In C++ we have the STL
    - STL has sort, and stable_sort
  • In C the generic sort is complex to use because arrays are ugly
  • In Java, guarantees and worst-case behavior are important
    - Why won't quicksort be used?
  • Comparators permit sorting criteria to change simply

Non-comparison-based sorts

- Lower bound: Ω(n log n) for comparison-based sorts (like the searching lower bound)
- Bucket sort/radix sort are not comparison-based: faster asymptotically and in practice
- Sort a vector of ints, all ints in the range 1..100, how? (use extra storage; see the sketch below)
- Radix sort: examine each digit of the numbers being sorted
  • One pass per digit
  • Sort based on digit
- [Figure: radix sort of 23 34 56 25 44 73 42 26 10 16 — first pass buckets by ones digit into bins 0..9, giving 10 42 23 73 34 44 25 56 26 16; second pass buckets by tens digit, giving 10 16 23 25 26 34 42 44 56 73]
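One answer to the 1..100 question: count how many times each value occurs, then write the values back out in order. A minimal counting sort sketch (the range bound MAX = 100 is the assumption that makes this work):

  void countingSort(int[] a) {
      final int MAX = 100;             // assumed: every value is in 1..MAX
      int[] counts = new int[MAX + 1]; // the extra storage
      for (int x : a) counts[x]++;     // one O(n) counting pass
      int k = 0;
      for (int v = 1; v <= MAX; v++)   // write each value out counts[v] times
          while (counts[v]-- > 0) a[k++] = v;
  }

No elements are ever compared with each other, so the Ω(n log n) lower bound doesn't apply: the running time is O(n + MAX). A stable version of this idea, applied one digit at a time, is exactly radix sort.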
