Galloping in Natural Merge Sorts

Vincent Jugé
Université Gustave Eiffel, LIGM (UMR 8049), CNRS, ENPC, ESIEE Paris, UPEM

Ghazal Khalighinejad
Sharif University of Technology, Tehran, Iran

arXiv:2012.03996v1 [cs.DS] 7 Dec 2020

Abstract

We study the algorithm TimSort and the sub-routine it uses to merge monotonic (non-decreasing) sub-arrays, hereafter called runs. More precisely, we look at the impact of using this sub-routine, instead of a naive routine, on the number of element comparisons performed. In this article, we introduce a new object for measuring the complexity of arrays. This notion is dual to the notion of runs, on which TimSort has built its success so far; hence we call it dual runs. It induces complexity measures that are dual to those induced by runs. We prove, for this new complexity measure, results that are similar to those already known for the standard run-induced measures. Although our new results do not lead to any improvement on the number of element moves performed, they may lead to dramatic improvements on the number of element comparisons performed by the algorithm. To this end, we introduce new notions of fast- and middle-growth for natural merge sorts, which allow deriving the same upper bounds. After using these notions successfully on TimSort, we prove that they can be applied to a wealth of variants of TimSort and other natural merge sorts.

2012 ACM Subject Classification Theory of computation → Sorting and searching

Keywords and phrases Sorting algorithms, Merge sorting algorithms, TimSort, ShiversSort, Analysis of algorithms

1 Introduction

In 2002, Tim Peters, a software engineer, created a new sorting algorithm, called TimSort [10]. This algorithm immediately demonstrated its efficiency for sorting actual data, and was adopted as the standard sorting algorithm in core libraries of widespread programming languages such as Python and Java. Hence, the prominence of such a custom-made algorithm over previously preferred optimal algorithms contributed to a renewed interest in the study of sorting algorithms.

Among the best-identified reasons behind the success of TimSort is the fact that this algorithm is well adapted to the architecture of computers (e.g., for dealing with cache issues) and to realistic distributions of data. In particular, the very conception of TimSort makes it particularly well-suited to sorting data whose run decompositions [3, 5] (see Figure 1) are simple. Such decompositions were already used by Knuth's NaturalMergeSort [7, Section 5.2.4], which predated TimSort and adapted the traditional MergeSort algorithm as follows: NaturalMergeSort is based on splitting arrays into monotonic subsequences, also called runs, and on merging these runs together. Thus, all algorithms sharing this feature of NaturalMergeSort are also called natural merge sorts. In addition to being a natural merge sort, TimSort includes many optimisations, which were carefully engineered, through extensive testing, to offer the best complexity performance.

Figure 1 A sequence and its run decomposition computed by a greedy algorithm: for each run, the first two elements determine whether it is non-decreasing or decreasing; the run then extends with the maximum number of consecutive elements that preserve this monotonicity. The sequence S = (12, 7, 6, 5 | 5, 7, 14, 36 | 3, 3, 5, 21, 21 | 20, 8, 5, 1) thus splits into four runs.
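To make the greedy decomposition concrete, here is a minimal Python sketch of the rule stated in the caption of Figure 1; the function run_decomposition and its list-of-lists output are our own illustrative choices, not TimSort's actual code.

def run_decomposition(a):
    # Greedily split `a` into maximal runs: the first two elements of
    # each run decide whether it is non-decreasing or decreasing, and
    # the run then absorbs as many further consecutive elements as
    # preserve that monotonicity.
    runs, i, n = [], 0, len(a)
    while i < n:
        j = i + 1
        if j < n and a[j] < a[i]:  # decreasing run
            while j + 1 < n and a[j + 1] < a[j]:
                j += 1
        else:  # non-decreasing run
            while j + 1 < n and a[j + 1] >= a[j]:
                j += 1
        runs.append(a[i:j + 1])
        i = j + 1
    return runs

S = [12, 7, 6, 5, 5, 7, 14, 36, 3, 3, 5, 21, 21, 20, 8, 5, 1]
print(run_decomposition(S))
# [[12, 7, 6, 5], [5, 7, 14, 36], [3, 3, 5, 21, 21], [20, 8, 5, 1]]

Reversing the decreasing runs in place then yields the non-decreasing runs that TimSort actually manipulates.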
As a result, the general structure of TimSort can be split into three main components: (i) a variant of an insertion sort, which is used to deal with small runs (e.g., runs of length less than 32), (ii) a simple policy for choosing which large runs to merge, (iii) a sub-routine for merging these runs. The second component has been subject to intense scrutiny these last few years, thereby giving birth to a great variety of TimSort-like algorithms. On the contrary, the first and third components, which seem more complicated and whose effect may be harder to quantify, have often been used as black boxes when studying TimSort or designing variants thereof.

Context and related work

The success of TimSort has nurtured the interest in the quest for sorting algorithms that would be adapted to arrays with few runs. However, the ad hoc conception of TimSort made its complexity analysis less easy than one might have hoped, and it was only in 2015, a decade after TimSort had been largely deployed, that Auger et al. proved that TimSort requires O(n log(n)) comparisons for sorting arrays of length n [2]. This is optimal in the model of sorting by comparisons, if the input array can be an arbitrary array of length n. However, taking into account the run decomposition of the input array allows using finer-grained complexity classes, as follows.

First, one may consider only arrays whose run decomposition consists of ρ monotonic runs. On such arrays, the best worst-case time complexity one may hope for is O(n log(ρ)) [8]. Second, we may consider even more restricted classes of input arrays, and focus only on those arrays that consist of ρ runs of lengths r1, ..., rρ. In that case, the best worst-case time complexity is O(n + nH), where H is defined as H = H(r1/n, ..., rρ/n) and H(x1, ..., xρ) = −∑_{i=1}^{ρ} xi log2(xi) is the general entropy function [3, 6]. TimSort enjoys such an O(nH) time complexity [1].
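To make this measure concrete, the following Python sketch (ours, purely illustrative) computes H from the run lengths of the sequence in Figure 1, together with the corresponding n + nH comparison budget.

import math

def run_entropy(run_lengths):
    # H = H(r1/n, ..., r_rho/n), where H(x1, ..., x_rho) is the general
    # entropy function -sum_i xi * log2(xi).
    n = sum(run_lengths)
    return -sum((r / n) * math.log2(r / n) for r in run_lengths)

lengths = [4, 4, 5, 4]  # run lengths of the sequence in Figure 1
n, H = sum(lengths), run_entropy(lengths)
print(f"H = {H:.3f}; n + nH = {n + n * H:.1f}")  # H = 1.993; n + nH = 50.9

Since H ≤ log2(ρ), the O(n + nH) guarantee is at least as strong as the O(n log(ρ)) one.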
In fact, since TimSort was invented, several natural merge sorts have been proposed, all of which were meant to offer easy-to-prove complexity guarantees. Such algorithms include ShiversSort [11], which runs in time O(n log(n)); α-StackSort [2], which, like NaturalMergeSort, runs in time O(n log(ρ)); and α-MergeSort [4], PeekSort and PowerSort [9], and the most recent adaptive ShiversSort [6], which, like TimSort, run in time O(n + nH). Except for TimSort, these algorithms are, in fact, described only as policies for merging runs, the actual sub-routine used for merging runs being left implicit. In practice, choosing a naive merging sub-routine does not harm the worst-case time complexities considered above. As a consequence, all authors identified the cost of merging two runs of lengths m and n with the sum m + n, and the complexity of the algorithm with the sum of the costs of the merges processed.

One notable exception is that of [9], whose authors compare the running times of TimSort and of the variant of TimSort where the merging sub-routine is replaced by a naive routine. While the arrays on which the performance comparisons are performed are chosen to have a low entropy H, the authors did not try to identify which arrays could benefit the most from TimSort's sub-routine. As a result, they unsurprisingly observed that TimSort's complex sub-routine seemed less efficient than the naive one, but their work suggests another approach: finding distributions on arrays for which TimSort's merging sub-routine will actually be helpful.

Contributions

We study the time complexity of TimSort and its variants in a context where we refine the family of arrays we consider. This refinement is based on a notion of complexity of the input arrays that is dual to the decomposition of arrays into monotonic runs. Consider an array A whose values A[1], A[2], ..., A[n] are pairwise distinct integers between 1 and n: we identify A with a permutation of the set {1, 2, ..., n}. In the standard literature, we subdivide A into distinct increasing runs, i.e., we partition the set {1, 2, ..., n} into intervals R1, R2, ..., Rρ such that the function x ↦ A[x] is increasing on each interval Ri. A dual approach would consist in partitioning that set into intervals S1, S2, ..., Sσ, which we call dual runs below, such that the function x ↦ A⁻¹[x] is increasing on each interval Sj. It would then be feasible to sort the array A in time O(n log(σ)), or even O(n + nH*), where our intervals have lengths s1, s2, ..., sσ and H* = H(s1/n, ..., sσ/n). In this preliminary version, we prove that, thanks to its merging sub-routine, TimSort requires O(n log(σ)) comparisons. In a subsequent version of this paper, we will further prove that TimSort requires only O(nH*) comparisons.

2 TimSort and its sub-routine for merging runs

In this section, we describe the sorting algorithm TimSort and the components it consists of. As mentioned in Section 1, TimSort is based on decomposing its input array into monotonic (non-decreasing) runs, which are then merged together into larger monotonic runs. Following the literature on this subject, we first focus on the policy used by TimSort to decide which runs to merge. To that end, each monotonic run is identified with a pair of pointers to its first and last entries in the array. TimSort discovers its runs on the fly, from left to right, makes them non-decreasing, and inserts them into a stack, which is used to decide which runs should be merged. These actions are performed as described in Algorithm 1.

This description of TimSort relies on two black-box sub-routines for (i) finding a new run R and making it non-decreasing, and (ii) merging two runs. The first sub-routine works as follows. New runs in the input array are found by using a naive (yet optimal) algorithm, in which r comparisons are used to detect a new run R of length r. If r is smaller than a given threshold, the run R is made artificially longer by absorbing elements to its right.
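As an illustration, here is a minimal Python sketch of this first sub-routine; the function next_run, its plain insertion loop, and the fixed default threshold are our own simplifications (the actual TimSort derives the threshold, called minrun, from the array length and absorbs elements by binary insertion).

def next_run(a, start, threshold=32):
    # Detect the run beginning at index `start`, make it non-decreasing,
    # and, while it is shorter than `threshold`, absorb elements to its
    # right; returns the index of the run's last element.
    n = len(a)
    end = start
    if end + 1 < n and a[end + 1] < a[end]:
        # strictly decreasing run: extend it, then reverse it in place
        while end + 1 < n and a[end + 1] < a[end]:
            end += 1
        a[start:end + 1] = a[start:end + 1][::-1]
    else:
        # non-decreasing run: extend it as far as possible
        while end + 1 < n and a[end + 1] >= a[end]:
            end += 1
    while end + 1 < n and end - start + 1 < threshold:
        end += 1  # absorb a[end] into the sorted run by insertion
        x, k = a[end], end
        while k > start and a[k - 1] > x:
            a[k] = a[k - 1]
            k -= 1
        a[k] = x
    return end

a = [12, 7, 6, 5, 5, 7, 14, 36]
print(next_run(a, 0, threshold=4), a)  # 3 [5, 6, 7, 12, 5, 7, 14, 36]

Absorbing elements this way guarantees that every run, except perhaps the last one, reaches the threshold length, which limits the number of runs that the merge policy must handle.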