
Concise Notes on Data Structures and Algorithms

15 Merge sort and Quicksort

15.1 Introduction

The sorting algorithms we have looked at so far are not very fast, except for Shell sort and insertion sort on nearly-sorted lists. In this chapter we consider two of the fastest sorting algorithms known: merge sort and quicksort.

15.2 Merge Sort

Merge sort is a divide and conquer algorithm that solves a large problem by dividing it into parts, solving the resulting smaller problems, and then combining these solutions into a solution to the original problem. The strategy of merge sort is to sort halves of a list (recursively) then merge the results into the final sorted list. Merging is a pretty fast operation, and breaking a problem in half repeatedly quickly gets down to lists that are already sorted (lists of length one or zero), so this algorithm performs well. A Ruby implementation of merge sort appears in Figure 1 below.
def merge_sort(array)
  merge_into(array.dup, array, 0, array.size)
  return array
end

def merge_into(src, dst, lo, hi)
  return if hi-lo < 2
  m = (lo+hi)/2
  merge_into(dst, src, lo, m)
  merge_into(dst, src, m, hi)
  j, k = lo, m
  (lo..hi-1).each do |i|
    if j < m and k < hi
      if src[j] < src[k]
        dst[i] = src[j]; j += 1
      else
        dst[i] = src[k]; k += 1
      end
    elsif j < m
      dst[i] = src[j]; j += 1
    else # k < hi
      dst[i] = src[k]; k += 1
    end
  end
end

Figure 1: Merge Sort

Merging requires a place to store the result of merging two lists: duplicating the original list provides space for merging. Hence this algorithm duplicates the original list and passes the duplicate to merge_into(). This operation recursively sorts the two halves of the auxiliary list and then merges them back into the original list. Although it is possible to sort and merge in place, or to merge using a list only half the size of the original, the algorithms to do merge sort this way are complicated and have a lot of overhead: it is simpler and faster to use an auxiliary list the size of the original, even though it requires a lot of extra space.

In analyzing this algorithm, the measure of the size of the input is, of course, the length of the list sorted, and the operations counted are key comparisons. Key comparisons occur in the merging step: the smallest items in the two merged sub-lists are compared, and the smaller is moved into the target list. This step is repeated until one of the sub-lists is exhausted, at which point the remainder of the other sub-list is copied into the target list.

Merging does not always take the same amount of effort: it depends on the contents of the sub-lists.
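The merging step just described can also be sketched on its own. The following standalone merge helper is illustrative only (it is not part of Figure 1, which merges between two full-size arrays rather than building a new list): it compares the front elements of two sorted lists, moves the smaller into the result, and when one list is exhausted, appends the remainder of the other.

```ruby
# Illustrative sketch of merging two sorted lists (not the book's code).
# Note: this version consumes (mutates) its argument lists via shift.
def merge(left, right)
  result = []
  until left.empty? || right.empty?
    # one key comparison per element moved
    result << (left.first <= right.first ? left.shift : right.shift)
  end
  result + left + right   # at most one of left/right is non-empty here
end

merge([1, 4, 9], [2, 3, 10])   # => [1, 2, 3, 4, 9, 10]
```

Note that in this run only five comparisons are made for six elements: once the left list is exhausted, the remaining element of the right list is copied without further comparisons.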
In the best case, the largest element in one sub-list is always smaller than the smallest element in the other (which occurs, for example, when the input list is already sorted). If we make this assumption, along with the simplifying assumption that n = 2^k, then the recurrence relation for the number of comparisons in the best case is

B(n) = n/2 + 2 ∙ B(n/2)
     = n/2 + 2 ∙ (n/4 + 2 ∙ B(n/4)) = 2 ∙ n/2 + 4 ∙ B(n/4)
     = 2 ∙ n/2 + 4 ∙ (n/8 + 2 ∙ B(n/8)) = 3 ∙ n/2 + 8 ∙ B(n/8)
     = …
     = i ∙ n/2 + 2^i ∙ B(n/2^i)

The initial condition for the best case occurs when n is one or zero, in which case no comparisons are made. If we let n/2^i = 1, then i = k = lg n. Substituting this into the equation above, we have

B(n) = lg n ∙ n/2 + n ∙ B(1) = (n lg n)/2

Thus, in the best case, merge sort makes only about (n lg n)/2 key comparisons, which is quite fast. It is also obviously in O(n lg n).

In the worst case, the most comparisons are made when merging two sub-lists such that one is exhausted only when a single element remains in the other. In this case, every merge operation producing a target list of size n requires n-1 comparisons. We thus have the following recurrence relation:

W(n) = n-1 + 2 ∙ W(n/2)
     = n-1 + 2 ∙ (n/2-1 + 2 ∙ W(n/4)) = n-1 + n-2 + 4 ∙ W(n/4)
     = n-1 + n-2 + 4 ∙ (n/4-1 + 2 ∙ W(n/8)) = n-1 + n-2 + n-4 + 8 ∙ W(n/8)
     = …
     = n-1 + n-2 + n-4 + … + (n - 2^(i-1)) + 2^i ∙ W(n/2^i)

The initial conditions are the same as before, so we may again let i = lg n to solve this recurrence:

W(n) = n-1 + n-2 + n-4 + … + (n - 2^(lg n - 1)) + n ∙ W(1)
     = ∑_{j=0 to lg n - 1} (n - 2^j)
     = ∑_{j=0 to lg n - 1} n - ∑_{j=0 to lg n - 1} 2^j
     = n lg n - (2^(lg n) - 1)
     = n lg n - n + 1

The worst case behavior of merge sort is thus also in O(n lg n).
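These counts can be checked empirically by instrumenting the merge step to tally key comparisons. The sketch below is not from the text (the names counting_merge_sort and merge_count_into are illustrative); it is Figure 1's algorithm with a counter added at the single point where keys are compared. On an already-sorted list of 16 elements it performs exactly (16 ∙ lg 16)/2 = 32 comparisons, matching the best-case formula.

```ruby
# Sketch: Figure 1's merge sort, instrumented to count key comparisons.
# The method names are illustrative, not the book's.
def counting_merge_sort(array)
  @comparisons = 0
  merge_count_into(array.dup, array, 0, array.size)
  [array, @comparisons]
end

def merge_count_into(src, dst, lo, hi)
  return if hi - lo < 2
  m = (lo + hi) / 2
  merge_count_into(dst, src, lo, m)
  merge_count_into(dst, src, m, hi)
  j, k = lo, m
  (lo..hi - 1).each do |i|
    if j < m && k < hi
      @comparisons += 1             # the only key comparison in the algorithm
      if src[j] < src[k]
        dst[i] = src[j]; j += 1
      else
        dst[i] = src[k]; k += 1
      end
    elsif j < m
      dst[i] = src[j]; j += 1       # right half exhausted: copy, no comparison
    else
      dst[i] = src[k]; k += 1       # left half exhausted: copy, no comparison
    end
  end
end

sorted, count = counting_merge_sort((1..16).to_a)
# best case on sorted input: lg 16 = 4 levels of merging, 8 comparisons
# per level, so (n lg n)/2 = 32 comparisons in total
```

For any input of 16 elements, the count stays at or below the worst-case bound n lg n - n + 1 = 49.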
As an average case, let's suppose that each comparison of keys from the two sub-lists is equally likely to result in an element being moved into the target list from one sub-list as from the other. This is like flipping coins: it is as likely that an element from one sub-list will win the comparison as an element from the other. And as with flipping coins, we expect that in the long run the number of elements chosen from one list will be about the same as the number chosen from the other, so that the sub-lists will run out at about the same time. This situation is about the same as the worst case, so on average merge sort makes about the same number of comparisons as in the worst case.

Thus, in all cases, merge sort runs in O(n lg n) time, which means that it is significantly faster than the other sorts we have seen so far. Its major drawback is that it uses O(n) extra memory locations to do its work.

15.3 Quicksort

Quicksort was invented by C.A.R. Hoare in 1960, and it is still among the fastest algorithms for sorting random data by comparison of keys. A Ruby implementation of quicksort appears in Figure 2 below.

Quicksort is a divide and conquer algorithm. It works by selecting a single element in the list, called the pivot element, and rearranging the list so that all elements less than or equal to the pivot are to its left, and all elements greater than or equal to it are to its right. This operation is called partitioning. Once a list is partitioned, the algorithm calls itself recursively to sort the sub-lists to the left and right of the pivot. Eventually the sub-lists have length one or zero, at which point they are sorted, ending the recursion.
The heart of quicksort is the partitioning algorithm. This algorithm must choose a pivot element and then rearrange the list as quickly as possible so that the pivot element is in its final position, all values greater than the pivot are to its right, and all values less than it are to its left. Although there are many variations of this algorithm, the general approach is to choose an arbitrary element as the pivot, scan from the left until a value greater than the pivot is found, and from the right until a value less than the pivot is found. These values are then swapped, and the scans resume. The pivot element belongs in the position where the scans meet. Although it seems very simple, the quicksort partitioning algorithm is quite subtle and hard to get right. For this reason, it is generally a good idea to copy it from a source that has tested it extensively.
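By way of illustration only (this is not the book's Figure 2, and the warning above about copying a well-tested version still applies), here is a sketch of one well-known variant of the scan-and-swap scheme just described: a Hoare-style partition that takes the middle element as the pivot, scans inward from both ends, and swaps out-of-place pairs until the scans cross.

```ruby
# Illustrative Hoare-style partition sketch (not the text's Figure 2).
# Rearranges a[lo..hi] around a pivot and returns an index j such that
# everything in a[lo..j] is <= everything in a[j+1..hi].
def partition(a, lo, hi)
  pivot = a[(lo + hi) / 2]          # middle element as pivot
  i, j = lo - 1, hi + 1
  loop do
    begin; i += 1; end while a[i] < pivot   # scan right for a value >= pivot
    begin; j -= 1; end while a[j] > pivot   # scan left for a value <= pivot
    return j if i >= j                      # scans have met or crossed
    a[i], a[j] = a[j], a[i]                 # swap the out-of-place pair
  end
end

def quicksort(a, lo = 0, hi = a.size - 1)
  if lo < hi
    p = partition(a, lo, hi)
    quicksort(a, lo, p)             # note: p, not p-1, in this variant
    quicksort(a, p + 1, hi)
  end
  a
end

quicksort([5, 3, 8, 1, 9, 2])   # => [1, 2, 3, 5, 8, 9]
```

In this variant the returned index is where the scans meet, but the pivot itself is not guaranteed to land exactly there, which is why the left recursive call includes position p. Subtleties like this are exactly why the text advises copying a tested implementation.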