Markov Blanket Feature Selection for Support Vector Machines

Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)

Jianqiang Shen, Lida Li, and Weng-Keen Wong
1148 Kelley Engineering Center, School of EECS, Oregon State University, Corvallis, OR 97331, U.S.A.
{shenj,lili,wong}@eecs.oregonstate.edu

Copyright © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Based on information theory, optimal feature selection should be carried out by searching Markov blankets. In this paper, we formally analyze the current Markov blanket discovery approach for support vector machines and propose to discover Markov blankets by performing a fast heuristic Bayesian network structure learning. We give a sufficient condition under which our approach will improve performance. Two major factors that make it prohibitive to learn Bayesian networks from high-dimensional data sets are the large search space and the expensive cycle detection operations. We propose to restrict the search space by considering only the promising candidates, and to detect cycles using an online topological sorting method. Experimental results show that we can efficiently reduce the feature dimensionality while preserving a high degree of classification accuracy.

Introduction

Many machine learning problems require learning from high-dimensional data sets in which the number of instances is less than the number of features. A common strategy is to first run a feature selection step before applying a traditional learning algorithm. The main reasons for applying feature selection are to avoid overfitting by removing irrelevant features and to improve computational efficiency by decreasing the number of features the learning algorithm needs to consider. Many commonly used feature selection methods, such as mutual information and information gain (Yang & Pedersen 1997), are based on pairwise statistics. Although pairwise statistics can be computed quickly, they are unable to capture higher-order interactions and can be quite unreliable when the training data is sparse. Also, the user usually has to tune a threshold to determine the optimal feature set. This paper presents an efficient and reliable feature selection approach without the need for such a threshold.

Using Markov blankets for feature selection has been shown to be effective (Koller & Sahami 1996). The Markov blanket of a variable $X_i$ is defined as the set of variables $M$ that renders $X_i$ conditionally independent of the other variables given the variables in $M$. Feature selection algorithms based on Markov blankets have attracted much attention recently (Aliferis, Tsamardinos, & Statnikov 2003; Bai et al. 2004). They all depend on some independence tests. In this paper, we formally analyze the performance of these test-based algorithms.

We propose to discover Markov blankets by first performing a fast heuristic Bayesian network structure learning step. Bayesian networks exploit conditional independence to produce a succinct representation of the joint probability distribution over a set of variables. Heuristic structure learning first defines a score that describes the fitness of each possible structure to the observed data, and then searches for a structure that maximizes the score. We analyze the sufficient condition under which our approach improves performance.

Rather than learning a full Bayesian network, we could try to learn only the Markov blanket of the class variable. However, as will be shown in our results, we observe much better accuracy when we first learn a Bayesian network, because the network captures a more accurate global view of how all the random variables interact with each other. Secondly, learning a network gives us more flexibility: we can consider any node as the "class variable" once we have the Bayesian network. Finally, learning a network can still be quite efficient compared with learning only Markov blankets. Our method can learn a reasonably accurate Bayesian network from large text datasets within minutes.

Throughout this paper, we use linear support vector machines (SVMs) to perform classification after feature selection, since the SVM is one of the state-of-the-art learning methods and has met with success in numerous domains. This can be considered a hybrid generative-discriminative learning approach (Raina et al. 2004). Although discriminative classifiers usually achieve higher prediction accuracy than generative classifiers, it has been shown that a generative classifier outperforms its discriminative counterpart when the amount of training data is limited (Ng & Jordan 2002). In this paper we essentially apply the generative learner to identify which information is necessary for training the discriminative learner. Our hybrid approach significantly outperforms the pure SVM when the training size is small.
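As a concrete illustration of the hybrid approach, the sketch below reads the class variable's Markov blanket (its parents, its children, and its children's other parents) off an already-learned network and trains a linear SVM on only those features. This is a minimal sketch under our own assumptions: the parent-map encoding, the toy `dag`, the synthetic data, and the helper names are illustrative rather than the paper's code; only the Markov blanket definition itself comes from the text.

```python
import numpy as np
from sklearn.svm import LinearSVC

def markov_blanket(parents, target):
    """Markov blanket of `target` in a DAG encoded as {node: set of parents}:
    its parents, its children, and its children's other parents."""
    children = {n for n, ps in parents.items() if target in ps}
    spouses = {p for c in children for p in parents[c]} - {target}
    return parents[target] | children | spouses

# Toy "learned" network over features x1..x5 and class C (illustrative only).
dag = {"C": {"x1", "x2"}, "x3": {"C", "x4"},
       "x1": set(), "x2": set(), "x4": set(), "x5": {"x4"}}
order = ["x1", "x2", "x3", "x4", "x5"]
blanket = sorted(markov_blanket(dag, "C"))      # ['x1', 'x2', 'x3', 'x4']

# Hybrid step: the discriminative learner sees only the blanket features.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, len(order)))
y = X[:, 0] | X[:, 1]                           # labels driven by C's parents
cols = [order.index(f) for f in blanket]
clf = LinearSVC().fit(X[:, cols], y)
print(blanket, clf.score(X[:, cols], y))
```

Feature x5 falls outside the blanket and is dropped automatically, which is exactly the kind of threshold-free pruning the paper argues for.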
Searching Markov Blanket Features

Most commonly used feature selection methods (such as mutual information, information gain, and the chi-square statistic (Yang & Pedersen 1997)) are based on pairwise statistics, which are unreliable when the training size is small. It is also difficult to determine the optimal feature set size, and the user has to tune some threshold. Furthermore, pairwise statistics have difficulty removing redundant features and capturing higher-order interactions. Consider variables x and y. It is possible that both will be retained if they are highly correlated (and thus have close statistical measurements). It is also possible that both will be removed if the class variable's interaction with x or y alone is not significant, even though the co-occurrence of x and y is very predictive of the class.

The Markov blanket criterion can address the above problems. The Markov blanket subsumes the information that a variable has about all of the other features. Koller and Sahami (1996) suggested that we should remove features for which we find a Markov blanket within the set of remaining features. Since finding Markov blankets is hard, they proposed an approximate algorithm, which is suboptimal and very expensive due to its simple approximation (Baker & McCallum 1998). There are several other algorithms based on Markov blankets (Aliferis, Tsamardinos, & Statnikov 2003; Bai et al. 2004). They all depend on independence tests and are sensitive to test failures. Furthermore, the user still needs to tune the threshold to determine the optimal feature set size. We analyze their error bounds and propose to select features by heuristically searching Bayesian networks.

Analysis of Test-based Algorithms

For simplicity, let us concentrate on binary, complete data. We assume that the chi-square test is used for the independence test and a linear SVM is used for classification. The results can be easily generalized to other situations.

Regarding variables A and B, the independence test T is carried out by testing the null hypothesis $H_0: P_{AB} = P_A \cdot P_B$. Given the nominal level α, it rejects $H_0$ if the chi-square statistic $\tilde{\chi}^2 > \chi^2_{1,\alpha}$, where $\chi^2_{1,\alpha}$ is the upper α percentage point of the cumulative chi-square distribution with 1 degree of freedom. Let $S^n$ denote a training set with n samples generated from probability distribution $\mathcal{D}$, and let $\mathcal{F}(S^n)$ denote the feature set returned by applying feature selection algorithm $\mathcal{F}$ to $S^n$. We denote the expected error rate of $\mathcal{F}$ given n samples as $\mathrm{Err}^n(\mathcal{F})$. There could be many different definitions of $\mathrm{Err}^n(\mathcal{F})$; one possibility is

$$\mathrm{Err}^n(\mathcal{F}) = \int_{S^n \in \mathcal{D}} P(S^n)\,\frac{|\Phi \otimes \mathcal{F}(S^n)|}{|\Phi \cup \mathcal{F}(S^n)|}\,dS^n,$$

where Φ is the optimal feature set and $U \otimes V$ denotes the difference between sets U and V.

By the definition of the nominal level, such a test rejects a true null hypothesis of independence with probability approximately α. Thus, T will mistakenly treat two independent variables as dependent with probability no less than α. In practice, usually P > 0.5 and people set α to 5% to control the type I error. With a small α, it is very likely that T treats dependent variables as independent, as shown in the following:

Lemma 2. Given nominal level α, if we want the probability that T identifies two dependent variables as independent to be no more than ε, then we need

$$n > \frac{1}{\Delta}\left(\sqrt{\chi^2_{1,\alpha} + \frac{1}{4}U_\varepsilon^2} + \frac{U_\varepsilon}{2}\right)^2$$

samples, where ∆ is a constant depending on the correlation between the variables, and $U_\varepsilon$ is the upper ε percentage point of the cumulative normal distribution.

Proof (Sketch). We want to ensure that the type II error satisfies $P(\tilde{\chi}^2 \le \chi^2_{1,\alpha}) \le \varepsilon$. Under the alternative hypothesis, $\tilde{\chi}^2$ follows the non-central chi-square distribution with 1 degree of freedom and non-centrality parameter $\lambda = n \cdot \Delta$ (Meng & Chapman 1966). Applying the Pearson transformation and the Fisher approximation (Johnson & Kotz 1970), we get $n > \frac{1}{\Delta}\big(\sqrt{\chi^2_{1,\alpha} + \frac{1}{4}U_\varepsilon^2} + \frac{U_\varepsilon}{2}\big)^2$ after some operations.

Note that $\chi^2_{1,\alpha}$ and $U_\varepsilon$ grow extremely fast as α → 0 and ε → 0. A test-based feature selection algorithm usually involves Ω(m) independence tests given m variables. Because ∆ is usually very small, the probability that T treats dependent variables as independent is high with limited samples. Thus, test-based feature selection algorithms are too aggressive in removing features, as also shown in the experiments.
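To see how demanding this bound is, the short script below evaluates it with standard chi-square and normal quantiles from SciPy. The function name and the (α, ε, ∆) values are our illustrative choices, not the paper's.

```python
from scipy.stats import chi2, norm

def required_samples(alpha, eps, delta):
    """Lemma 2 bound: n > (1/delta) *
    (sqrt(chi2_{1,alpha} + U_eps**2 / 4) + U_eps / 2)**2."""
    c = chi2.ppf(1.0 - alpha, df=1)   # upper-alpha chi-square point, 1 d.o.f.
    u = norm.ppf(1.0 - eps)           # upper-eps normal percentage point
    return ((c + u * u / 4.0) ** 0.5 + u / 2.0) ** 2 / delta

for alpha, eps, delta in [(0.05, 0.05, 0.1),
                          (0.05, 0.05, 0.01),
                          (0.01, 0.01, 0.01)]:
    print(f"alpha={alpha}, eps={eps}, delta={delta}: "
          f"n > {required_samples(alpha, eps, delta):.0f}")
```

With ∆ = 0.01 (weak dependence), even the conventional α = ε = 0.05 already requires on the order of 900 samples, and tightening both to 0.01 pushes the requirement past 1,500, consistent with the observation that ∆ dominates in practice.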
Method Based on Bayesian Network Learning

We showed that test-based algorithms are very sensitive to α, and even with infinite samples their error rates are still proportional to α. We propose instead to search for Markov blankets by learning a Bayesian network with a good heuristic strategy. Bayesian networks provide a descriptive summary of the relationships between variables and an efficient representation of the joint probability distribution. In this paper, we use the Bayesian Information Criterion (BIC) as the search score. We choose BIC because it penalizes model complexity and leads to smaller feature sets. If the goal is to preserve prediction accuracy as much as possible, then a Bayesian score such as BDeu (Heckerman, Geiger, & Chickering 1995) might be preferred.

Given the class variable C, let M̂ be its minimal Markov blanket …
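Because BIC decomposes into one term per node, a structure search only needs to re-score the nodes whose parent sets change. The sketch below computes that per-node term for binary, complete data. It is generic scoring code written to the paper's stated assumptions; the toy data and function names are ours, not the authors' implementation.

```python
import numpy as np
from itertools import product

def bic_node(data, child, parents):
    """BIC contribution of one node: log-likelihood of `child` given its
    parent configurations, minus (log n / 2) * number of free parameters.
    `data` is an (n, d) 0/1 array; `child` and `parents` are column indices."""
    n = data.shape[0]
    ll = 0.0
    for assignment in product([0, 1], repeat=len(parents)):
        mask = np.ones(n, dtype=bool)
        for p, v in zip(parents, assignment):
            mask &= data[:, p] == v
        n_pa = mask.sum()
        if n_pa == 0:
            continue
        n1 = data[mask, child].sum()
        for cnt in (n1, n_pa - n1):
            if cnt > 0:
                ll += cnt * np.log(cnt / n_pa)
    n_params = 2 ** len(parents)      # one Bernoulli per parent configuration
    return ll - 0.5 * np.log(n) * n_params

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(500, 1))
y = (x[:, 0] ^ (rng.random(500) < 0.1)).reshape(-1, 1)  # y is a noisy copy of x
data = np.hstack([x, y]).astype(int)
print(bic_node(data, 1, [0]), ">", bic_node(data, 1, []))
```

On this toy data, scoring y with parent x beats scoring it with no parents by far more than the extra complexity penalty, so a greedy score-based search would keep the edge x → y.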