Utilizing the Linear Diophantine Problem of Frobenius for a Faster Shellsort Sequence


Dr. Bharti Temkin, Maximilian Berger

November 27, 2004

Abstract

TBD: Abstract of the paper

1 Introduction

TBD

2 The linear Diophantine problem of Frobenius

The linear Diophantine problem of Frobenius is equivalent to the coin exchange problem: What is the largest integer that can not be represented as a linear combination of positive integers $x_1, \ldots, x_n > 1$ that are relatively prime?

This problem has not been solved in general yet. However, [5] has solved it for the case $n = 2$:

Theorem 1. We assume the two positive integers $x_1$ and $x_2$ are relatively prime. Then every integer $x > x_1 x_2 - x_1 - x_2$ can be represented as a linear combination of $x_1$ and $x_2$, $x = c_2 x_2 + c_1 x_1$, with positive integer coefficients $c_1, c_2$. [5]

What about the integers $1 \le x \le (x_1 - 1)(x_2 - 1)$? Some of them can still be represented with positive integer coefficients; some, however, can not. Those which can not be represented with positive integer coefficients can be represented with a negative coefficient:

Theorem 2. We assume the two positive integers $x_1$ and $x_2$ are relatively prime. Then every positive integer $x$ that can not be represented with positive integer coefficients $c_1, c_2$ as a linear combination $x = c_2 x_2 + c_1 x_1$ can be represented with one negative coefficient $c_{11}$: $x = c_2 x_2 - c_{11} x_1$.

TBD: Where is this proven?

We may safely limit the coefficient $c_2$ to the range $[0 .. x_1 - 1]$: should $c_2$ be greater, we can increase $c_1$ instead (replacing $c_2$ by $c_2 - x_1$ and $c_1$ by $c_1 + x_2$ leaves $x$ unchanged).

Having done this, we can now represent every integer in a table, using $c_1$ as the x-axis and $c_2$ as the y-axis, as shown in Figure 1.

     3    8   13   18   23   28   33   38   43  ...
          1    6   11   16   21   26   31   36  ...
               4    9   14   19   24   29  ...
                    2    7   12   17   22  ...
                         0    5   10   15  ...

Figure 1: Tabular display of the integers as linear combinations of $x_1 = 5$ and $x_2 = 7$ (rows from top to bottom: $c_2 = 4$ down to $c_2 = 0$; $c_1$ increases to the right). The entries left of the $c_1 = 0$ column (the column containing 28, 21, 14, 7, 0) are those that need a negative coefficient $c_1$.

This representation gives us another way to prove Theorem 2. Since $c_1$ and $c_2$ are positive, the entry at $(x_a, y_b)$ must be larger than any entry at $(x_i, y_b)$ where $i < a$, and larger than any entry at $(x_a, y_j)$ where $j < b$.

The largest number with a negative coefficient $c_1$ must therefore be the one that has the largest possible $c_2$ and the largest possible $c_1$. The largest possible $c_2$ is $x_1 - 1$ (as we defined earlier). The largest possible negative $c_1$ is $-1$. Therefore, the largest integer that can not be represented is $x = (x_1 - 1) x_2 - 1 \cdot x_1$, which is equivalent to $x_1 x_2 - x_1 - x_2$.

The next question is: How many integers $x \le (x_1 - 1)(x_2 - 1)$ can not be represented with positive coefficients? Looking at the tabular representation again, this asks how many numbers lie to the left of the column of $c_1 = 0$.

For $c_2 = 0$ this is $0$, since $0 \cdot x_2$ will always be $0$ and there are no positive integers less than $0$. For any $c_2$ it is $\lfloor c_2 x_2 / x_1 \rfloor$; in particular, for $c_2 = x_1 - 1$ it is $\lfloor (x_1 - 1) x_2 / x_1 \rfloor$. If we sum all these up for $c_2 = [0 .. x_1 - 1]$ we get $(x_1 - 1)(x_2 - 1)/2$.

TBD: How?????

Theorem 3. We assume the two positive integers $x_1$ and $x_2$ are relatively prime. Then $(x_1 - 1)(x_2 - 1)/2$ integers can not be represented as a linear combination of $x_1$ and $x_2$, $x = c_2 x_2 + c_1 x_1$, with positive integer coefficients $c_1, c_2$.

Unfortunately, the current research only gives good explanations for two relatively prime numbers. There are several papers that try to find upper and lower bounds for the general case. So far, no general formula has been found. ??
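Theorems 1 and 3 are easy to verify mechanically for a small pair such as $x_1 = 5$, $x_2 = 7$. The sketch below is our own illustration, not code from the paper, and the names `representable` and `frobenius_check` are ours; note that it checks representability with nonnegative coefficients, which is the convention Figure 1 effectively uses (its bottom row is $c_2 = 0$):

```python
from math import gcd

def representable(x, x1, x2):
    """True if x = c1*x1 + c2*x2 for some nonnegative integers c1, c2.
    As argued in Section 2, it suffices to try c2 in [0 .. x1 - 1]."""
    return any((x - c2 * x2) >= 0 and (x - c2 * x2) % x1 == 0
               for c2 in range(x1))

def frobenius_check(x1, x2):
    """Brute-force confirmation of Theorems 1 and 3 for one coprime pair."""
    assert gcd(x1, x2) == 1
    # Every x >= (x1 - 1)(x2 - 1) is representable, so scanning below
    # x1*x2 is guaranteed to find all non-representable integers.
    non_rep = [x for x in range(1, x1 * x2) if not representable(x, x1, x2)]
    assert max(non_rep) == x1 * x2 - x1 - x2          # Theorem 1
    assert len(non_rep) == (x1 - 1) * (x2 - 1) // 2   # Theorem 3
    return non_rep

print(frobenius_check(5, 7))
# [1, 2, 3, 4, 6, 8, 9, 11, 13, 16, 18, 23]
```

These are exactly the twelve entries to the left of the $c_1 = 0$ column in Figure 1, the largest being $23 = 5 \cdot 7 - 5 - 7$.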
3 Shellsort

Shellsort, as suggested in TBD, is a repeated version of k-insertion sort. Insertion sort sorts a list by inserting every element into an already sorted list; k-insertion sort sorts every kth element of a list.

Definition 1. A list of $n$ elements $e_i$ is said to be k-ordered if $e_i \le e_{i+ck}$ for all $1 \le i \le n$ and all $c \ge 1$ with $i + ck \le n$, where $c, i, k \in \mathbb{N}^+$.

Shellsort calls k-insertion sort with decreasing values of $k$. To ensure that the list is sorted, the last step is $k = 1$.

4 Shellsort performance with relatively prime numbers

We will assume that $k, l \in \mathbb{N}^+$ are two relatively prime numbers. If we sort a list of elements $e_i$ with a k- and an l-insertion sort, we get a k,l-ordered list with the properties

    $e_i \le e_{i + c_1 k}$   and   $e_j \le e_{j + c_2 l}$.

If we set $j = i + c_1 k$ we can combine these two:

    $e_i \le e_{i + c_1 k + c_2 l}$

From Theorem 1 we know that $c_1 k + c_2 l$ can represent every integer $x \ge (k-1)(l-1)$, thus

    $e_i \le e_{i + (k-1)(l-1) + 1}$,

which leads us to the following

Theorem 4. Every element in a k,l-ordered list is less than $(k-1)(l-1)$ indexes away from its sorted position in a 1-sorted list, if $1 \le k < l$ and $k$ and $l$ are relatively prime.

Also, if we look at Theorem 3 we can see that

Theorem 5. Every element in a k,l-sorted list has at most $(k-1)(l-1)/2$ elements appearing before it that should appear later, if $1 \le k < l$ and $k$ and $l$ are relatively prime.
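For concreteness, here is a minimal Python sketch of Shellsort as repeated k-insertion sort, exactly as Section 3 describes it. This is our illustration, not the authors' implementation:

```python
def k_insertion_sort(a, k):
    """One Shellsort pass: insertion sort every k-th element, i.e. each of
    the k interleaved sublists a[r], a[r+k], a[r+2k], ...
    Afterwards the list is k-ordered in the sense of Definition 1."""
    for i in range(k, len(a)):
        value, j = a[i], i
        while j >= k and a[j - k] > value:
            a[j] = a[j - k]   # shift larger elements k positions to the right
            j -= k
        a[j] = value

def shellsort(a, gaps):
    """Call k-insertion sort with decreasing gaps; the last gap must be 1."""
    for k in sorted(set(gaps), reverse=True):
        k_insertion_sort(a, k)

data = [9, 1, 8, 2, 7, 3, 6, 4, 5, 0]
shellsort(data, [5, 3, 1])   # 5 and 3 are relatively prime, as Section 4 assumes
print(data)                  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

After the 5-pass and the 3-pass the list is 3,5-ordered, so by Theorem 4 every element is fewer than $(3-1)(5-1) = 8$ positions from its sorted position before the final pass begins.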
5 Shellsort performance with non relatively prime numbers

If we take two numbers $k, l$ that are not relatively prime, only numbers containing the common factor $\gcd(k, l)$ can be represented in terms of $k$ and $l$. Applied to Shellsort, this means that until the very last step there will always be values that may need to move through the whole list. Shellsort's strength, however, lies in eliminating exactly those elements, so we arrive at the following

Assumption 1. For a Shellsort sequence to be effective, all the numbers have to be relatively prime.

6 Growth of a Shellsort sequence

The growth of the sequence is another important factor. If the sequence grows too slowly, too many Shellsort passes will be made, leading to unnecessary comparisons, which cost too much time. If the sequence grows too fast, the advantages of Shellsort are gone, and the behaviour gets closer and closer to that of straight insertion sort.

If we look at effective sequences found in the literature, most of them grow approximately by a factor of 2. No rule for perfect Shellsequence growth has been found, but this factor seems to be good.

Assumption 2. The Shell sequence may neither grow too fast nor too slowly. A factor close to 2 seems to give the best results.

7 Constructing a Shellsequence

We now try to construct Shell sequences based on Assumptions 1 and 2.

To satisfy Assumption 2 we construct an ideal sequence $f(n)$ with the following properties:

    $f(1) = 1$                                                   (1)
    $f(n) = f(n-1) \cdot c$ where $c > 1$, $c \in \mathbb{R}$    (2)

To satisfy Assumption 1 we define the sequence $s(n)$ as follows: $s(n)$ is the smallest integer greater than $f(n)$ that is relatively prime to all of $s(2) .. s(n-1)$.

8 Finding the best growth factor

To find the best growth factor, we use the above method to construct Shell sequences. We then apply these sequences to sort arrays of different sizes containing random data.

For $c$ we used the range 1.50..3.00 in increments of 0.05. For the array size, we used $10^3$, $10^4$, $10^5$ and $10^6$. Each sort has been done 5 times to assure accuracy. The results can be found in Figures 2, 3 and 4.

As can be seen from Figure 4, the faster the sequence grows, the less overhead is involved, and the actual algorithm runs faster. This is, however, due to the fact that we were doing integer comparisons, which are very fast on modern computers, so the management overhead dominates the runtime.

If comparisons were to dominate the runtime instead, we would need to minimize them. If we look at Figure 2 we see multiple minima, at $c$ = 2.2, 2.35, 2.45 and 2.55. We will therefore examine the range 2.1..2.6 more closely, in steps of 0.01. The results can be found in Figure 5.

    c       10^6        10^5       10^4      10^3
    1.50    148096157   4033761    268781    19761
    1.55     94642467   3489655    261369    18845
    1.60     66307896   3222457    253077    18089
    1.65     51195019   3071179    243260    17250
    1.70     43806001   3076849    242129    17040
    1.75     38690668   2969786    230646    16381
    1.80     36338939   2922294    226307    15934
    1.85     34896162   2850479    219403    15491
    1.90     35119277   2883186    222804    15491
    1.95     34096450   2770747    213018    15108
    2.00     38363880   2872861    213661    14823
    2.05     33867497   2740231    210997    14689
    2.10     33571066   2714777    206737    14647
    2.15     33354981   2695285    205777    14379
    2.20     33162445   2677880    203966    14326
    2.25     33428464   2698651    208228    14090
    2.30     32995665   2655091    199975    13956
    2.35     32921442   2648227    200888    14014
    2.40     33329546   2676205    201823    13889
    2.45     32853407   2631833    199408    13738
    2.50     33138322   2654605    200421    13845
    2.55     32992964   2628471    197599    13704
    2.60     33033944   2628661    198060    13643
    2.65     33684086   2685786    202121    13597
    2.70     33978169   2713627    206348    13810
    2.75     34582299   2757018    207685    13960
    2.80     33991997   2672757    201107    13824
    2.85     34088609   2710268    203217    13770
    2.90     34198867   2697561    203390    13733
    2.95     35282875   2806847    209515    14078
    3.00     43128969   2934564    207531    13542

Figure 2: Number of data comparisons for different values of c

    c       10^6        10^5       10^4      10^3
    1.50    296192315   8067523    537563    39522
    1.55    189284934   6979310    522738    37691
    1.60    132615792   6444915    506154    36179
    1.65    102390038   6142358    486520    34501
    1.70     87612002   6153698    484259    34081
    1.75     77381337   5939572    461293    32762
    1.80     72677879   5844589    452615    31869
    1.85     69792324   5700958    438807    30983
    1.90     70238555   5766372    445608    30983
    1.95     68192900   5541494    426036    30216
    2.00     76727761   5745722    427323    29646
    2.05     67734994   5480462    421994    29378
    2.10     67142132   5429554    413474    29295
    2.15     66709963   5390571    411554    28759
    2.20     66324891   5355760    407932    28653
    2.25     66856929   5397303    416457    28180
    2.30     65991331   5310182    399950    27912
    2.35     65842884   5296454    401777    28028
    2.40     66659092   5352411    403647    27778
    2.45     65706815   5263666    398817    27477
    2.50     66276645   5309210    400842    27691
    2.55     65985928   5256942    395198    27408
    2.60     66067889   5257323    396120    27286
    2.65     67368172   5371573    404242    27194
    2.70     67956339   5427254    412696    27620
    2.75     69164598   5514036    415371    27920
    2.80     67983994   5345514    402215    27648
    2.85     68177219   5420536    406435    27540
    2.90     68397734   5395123    406780    27466
    2.95     70565750   5613694    419030    28156
    3.00     86257939   5869128    415063    27084

Figure 3: Number of data movements for different values of c
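To make the construction of Section 7 concrete, here is a small Python sketch. It is our illustration; in particular, fixing $s(1) = 1$ and checking coprimality only against $s(2) .. s(n-1)$ is our reading of the definition:

```python
from math import gcd

def shell_sequence(c, length):
    """Section 7's construction: f(1) = 1 and f(n) = f(n-1) * c; s(n) is
    the smallest integer greater than f(n) that is relatively prime to
    all of s(2) .. s(n-1)."""
    seq, f = [1], 1.0
    while len(seq) < length:
        f *= c
        candidate = int(f) + 1   # smallest integer strictly greater than f
        while any(gcd(candidate, s) != 1 for s in seq[1:]):  # skip s(1) = 1
            candidate += 1
        seq.append(candidate)
    return seq

print(shell_sequence(2.2, 8))   # [1, 3, 5, 11, 26, 53, 119, 251]
```

Passing such a sequence (largest gap first) to the Shellsort sketch above reproduces the setup used for the measurements reported in Figures 2 and 3.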