
MULTILEVEL SORTING ALGORITHMS

LONG CHEN

ABSTRACT. Two multilevel sorting algorithms, merge-sort and quick-sort, are briefly discussed in this note. Both are divide-and-conquer algorithms and have average complexity O(N log N) for a list of size N.

Date: Sept 1, 2015, updated on September 21, 2015.

We shall discuss two multilevel sorting algorithms to sort, in ascending order, a list of N values represented by an array (or a list) a(1:N). The philosophy is "divide and conquer". A divide-and-conquer algorithm usually consists of three steps:

(1) divide the problem into sub-problems of smaller sizes;
(2) solve each sub-problem;
(3) merge the solutions of the sub-problems.

When solving the sub-problems, i.e., in Step 2, the same procedure can be applied recursively, which results in a multilevel algorithm. Therefore we only need to describe a two-level method in detail. The dividing step is usually referred to as top-to-bottom and the merge step as bottom-to-top.

Before we move on to the multilevel algorithms, we briefly review two classic sorting algorithms: insertion sort and bubble sort. Insertion sort inserts elements from the list one by one into their correct positions in a new sorted list. It is simple and relatively efficient for small lists, but each insertion is expensive, requiring all following elements to be shifted over by one. Bubble sort compares adjacent elements and swaps them if they are out of order; it continues doing this for each pair of adjacent elements until no swapping is needed. Both algorithms have average and worst-case complexity O(N^2), and both suffer from operating locally at the finest scale. Merge-sort and quick-sort can be thought of as insertion sort and bubble sort, respectively, applied across multiple scales.

1. MERGE SORT

1.1. Algorithm. Merge sort is a natural and intuitive multilevel algorithm. It was discovered by von Neumann in 1945 and rediscovered by many researchers.

Algorithm: Mergesort
(1) Divide the input list into two almost equal-size sub-lists.
(2) Sort each sub-list.
(3) Merge the two sorted sub-lists to get the sorted list.

In Step 2, the algorithm can be called recursively to sort each sub-list. The recursion stops when each sub-list contains one element, which is considered sorted. A more efficient criterion is to stop when the size is below a threshold, say 30, and then apply a simple sorting algorithm, e.g., insertion-sort. In this way the algorithm takes advantage of the speed of insertion sort on small data sets.

The dividing step is trivial: just split the input list into two almost equal-size sub-lists. Let $m = \lfloor N/2 \rfloor$. The two sub-lists are simply a(1:m) and a(m+1:N).

The tricky part is the merge step. Given two sorted lists a and b, they can be merged into a sorted list with a number of operations proportional to the sizes of a and b. A possible implementation is as follows. Maintain two pointers i and j, both initialized to 1. The pointer i moves forward (left to right, i.e., from 1 to the end) in a and stops when a(i) > b(j). Then j moves forward in b until b(j) > a(i). Record the values (before each stop) along the way. When either i or j reaches the end, the merge is finished. The length of the path of i and j is at most the length of the corresponding list, and thus the complexity of the merge is bounded by length(a) + length(b). The merge step can be thought of as inserting sub-lists of various lengths, not just one element at a time; a sketch is given below. Merge-sort requires additional O(N) (memory) space for the sorted output and involves data movement due to the tricky merge part.
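As a concrete illustration, here is a minimal Python sketch of the merge step and the recursive algorithm. The note itself contains no code; the names merge and merge_sort and all details below are my own, and the sketch builds a new list rather than manipulating pointers in place.

    def merge(a, b):
        # Merge two sorted lists a and b into one sorted list.
        # The cost is proportional to len(a) + len(b).
        result = []
        i = j = 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                result.append(a[i])
                i += 1
            else:
                result.append(b[j])
                j += 1
        # One list is exhausted; the tail of the other is already sorted.
        result.extend(a[i:])
        result.extend(b[j:])
        return result

    def merge_sort(a):
        # Recursion stops at size one; a production version might instead
        # switch to insertion sort below a small threshold, say 30.
        if len(a) <= 1:
            return list(a)
        m = len(a) // 2          # m = floor(N/2)
        return merge(merge_sort(a[:m]), merge_sort(a[m:]))

For example, merge_sort([3, 1, 4, 1, 5]) returns [1, 1, 3, 4, 5].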
1.2. Complexity. Let N be the size of the input list. It takes log2 N dividing steps to get sub-lists with one element. In the merge phase, merging the sub-lists in each level requires O(N) operations. So in total, merge-sort is an O(N log2 N) sorting algorithm.

Another complexity analysis is as follows. Let T(N) be the operation count of merge sort for a list of size N. Then the recurrence T(N) = 2T(N/2) + N follows from the definition. Unrolling the recurrence over the log2 N levels, with T(1) = O(1), gives the closed form T(N) = O(N log2 N).

Exercise 1.1. Implement the merge-sort.

2. QUICK SORT

2.1. Algorithm. Quick sort was invented by Tony Hoare in 1960, and the original algorithm and several of its variations were presented in 1962 [1]. Sedgewick refined and popularized quicksort and analyzed many of its versions in 1978 [3].

Algorithm: Quicksort
(1) (a) Choose an element (called a pivot) from the list;
    (b) Partition the list into left and right sub-lists such that all elements in the left list are smaller than the pivot and all elements in the right list are larger.
(2) Sort the left and the right sub-lists.
(3) Merge the sorted sub-lists into a sorted list.

In Step 2, the algorithm can be called recursively to sort each sub-list. The recursion stops at a certain level. An obvious criterion is that the list contains only one element. A better one is to stop when the size is less than 30 and then use insertion-sort, which is faster than quick-sort at this scale.

Compared with the general three steps of divide-and-conquer, the merging phase is trivial; the focus is on the dividing. In mergesort, by contrast, the dividing is trivial but the merge requires some work.

The choice of a pivot is the key. In the worst scenario, the pivot happens to be the largest or the smallest number; then quick-sort is as slow as bubble sort and insertion sort. The ideal pivot would be the median, so that the left and right sub-lists have almost equal size. However, it is not cheap to compute the median of an unsorted list. The simplest choice is to select an element from a fixed position of the input list: the first, the last, or the middle item. A better strategy is to approximate the median of a by computing the median of a small subset of a. For example, the median-of-three method: choose the median of the left, middle, and right elements of the list. A fair choice is to randomly select an element from the input list. The randomness reduces the average complexity and introduces a core algorithmic concept: randomized algorithms. Indeed, many randomized algorithms in computational geometry can be viewed as variations of quick-sort [2].

The partitioning step is straightforward if an additional array is allocated. We simply scan the input array and save the left list from left to right (forwards) and the right list backwards in the new array. An in-place partition (i.e., one requiring only O(1) additional space) can be realized by swapping [3]. We use two pointers i and j for the left and right lists. The pointer i moves forwards and j backwards: i stops when a(i) > p and j stops when a(j) < p. If i < j, we swap a(i) and a(j) and continue. The partition is achieved when i > j. The pivot can first be swapped to the leftmost or rightmost location; see the first sketch below.

When the input list contains many duplicated elements, the partition does not perform well. A simple fix is a 3-way partition: just add one more sub-list to store all elements equal to the pivot value (see the second sketch below).
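The in-place swapping partition can be sketched in Python along these lines. The names hoare_partition and quick_sort are my own; this follows the classical Hoare scheme (pointers stop at elements on the wrong side, crossing ends the scan) rather than any specific code in [1] or [3].

    def hoare_partition(a, lo, hi):
        # Two-pointer in-place partition of a[lo..hi] around a pivot p:
        # i moves forward and stops at an element >= p, j moves backward
        # and stops at an element <= p; out-of-place pairs are swapped.
        p = a[(lo + hi) // 2]        # middle element as a simple pivot choice
        i, j = lo - 1, hi + 1
        while True:
            i += 1
            while a[i] < p:
                i += 1
            j -= 1
            while a[j] > p:
                j -= 1
            if i >= j:               # pointers have crossed: partition done
                return j
            a[i], a[j] = a[j], a[i]

    def quick_sort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo < hi:                  # sub-lists of size <= 1 are sorted
            q = hoare_partition(a, lo, hi)
            quick_sort(a, lo, q)     # note: q, not q - 1, in Hoare's scheme
            quick_sort(a, q + 1, hi)

The sort is in place: after a = [3, 1, 4, 1, 5]; quick_sort(a), the list a equals [1, 1, 3, 4, 5].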
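A simple (not in-place) sketch of the 3-way partition is the following; partition3 is a hypothetical helper name of my own choosing.

    def partition3(a, p):
        # 3-way partition around the pivot value p: the middle list collects
        # all duplicates of p, so they never enter a recursive call.
        left   = [x for x in a if x < p]
        middle = [x for x in a if x == p]
        right  = [x for x in a if x > p]
        return left, middle, right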
Because the partition may reorder equal elements, quick-sort is not a stable sort, meaning that the relative order of equal sort items is not preserved.

2.2. Complexity. The average complexity of quick-sort is O(N ln N) and the worst case is O(N^2). Here complexity is simply the operation count; the dominant operation can be chosen as comparisons or swaps. The speed also depends on other issues, e.g., the data movement and its spatial locality (cache-efficiency). For example, the complexity of merge-sort is always O(N log N), but in practice quick-sort performs better than merge-sort due to the large data movement and the additional space needed in merge-sort.

The worst case is straightforward. To analyze the average case, we assume the pivot is randomly chosen from the input list and, to simplify the discussion, that the values are distinct, i.e., there is no duplication in the input list.

We first follow Ross [4] to show that the average complexity is O(N ln N). Let X denote the number of comparisons. We are interested in the expectation E[X]. Let $s_i$ be the i-th number in the sorted list, i.e., $s_1 < s_2 < \cdots < s_N$, and let $I(i,j) = 1$ if $s_i$ and $s_j$ are directly compared, and $I(i,j) = 0$ otherwise. Using this notation, we can express X as
\[
X = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} I(i,j),
\]
and the expectation can be simplified as
\[
E[X] = E\Big[\sum_{i=1}^{N-1} \sum_{j=i+1}^{N} I(i,j)\Big] = \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} E[I(i,j)]
= \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \Pr\{s_i \text{ and } s_j \text{ are compared}\}.
\]
To be directly compared, the set of numbers $s[i:j] := \{s_i, s_{i+1}, \dots, s_{j-1}, s_j\}$ should be in the same list; this happens if and only if the first pivot chosen from $s[i:j]$ is either $s_i$ or $s_j$, which occurs with probability $2/(j-i+1)$.
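As an illustrative numerical check of this analysis (entirely my own, not part of the note or of Ross [4]), one can count the comparisons made by a randomized quicksort on random distinct inputs and compare the average against 2N ln N:

    import math
    import random

    def comparisons(a):
        # Comparisons made by randomized quicksort on distinct values:
        # len(a) - 1 comparisons against the pivot, then recurse on both sides.
        if len(a) <= 1:
            return 0
        p = random.choice(a)
        left  = [x for x in a if x < p]
        right = [x for x in a if x > p]
        return len(a) - 1 + comparisons(left) + comparisons(right)

    N, trials = 1000, 200
    avg = sum(comparisons(random.sample(range(10**6), N))
              for _ in range(trials)) / trials
    # The average agrees with 2 N ln N up to lower-order terms.
    print(avg, 2 * N * math.log(N))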