Modular Average Case Analysis: Language Implementation and Extension
Title: Modular average case analysis: language implementation and extension
Author(s): Gao, Ang
Publication date: 2013
Original citation: Gao, A. 2013. Modular average case analysis: language implementation and extension. PhD Thesis, University College Cork.
Type of publication: Doctoral thesis
Rights: © 2013, Ang Gao. http://creativecommons.org/licenses/by-nc-nd/3.0/
Embargo information: No embargo required
Item downloaded from: http://hdl.handle.net/10468/1089

Modular Average Case Analysis: Language Implementation and Extension
Ang Gao
A thesis submitted to the National University of Ireland, Cork for the degree of Doctor of Philosophy.
Department of Computer Science, National University of Ireland, Cork
Supervisor: Prof. Michel Schellekens
Head of Department: Prof. Barry O'Sullivan
April 2013

Contents

Contents; List of Figures; List of Tables; List of Listings; Declaration; Dedication; Acknowledgements; Abstract

1 Introduction
  1.1 Introduction
  1.2 Research topics
    1.2.1 MOQA language design and interpreter implementation
    1.2.2 Parallel extension of MOQA
    1.2.3 Exploring MOQA applications
  1.3 Research contributions
  1.4 Dissertation structure

2 Background
  2.1 Introduction
  2.2 Static Average Case Analysis of Algorithms
    2.2.1 Static Program Analysis (2.2.1.1 Data Flow Analysis; 2.2.1.2 Abstract Interpretation; 2.2.1.3 Symbolic Analysis)
    2.2.2 Time Analysis of Programs
    2.2.3 MOQA Approach to Static Average Case Analysis
  2.3 A Review of MOQA Theory
    2.3.1 Timing Compositionality (2.3.1.1 Worst-Case; 2.3.1.2 Average-Case)
    2.3.2 MOQA Data Structures (2.3.2.1 Partial Orders and Labelled Partial Orders; 2.3.2.2 Random Structure/Random Bag; 2.3.2.3 Randomness Preserving)
    2.3.3 MOQA Basic Operations (2.3.3.1 MOQA Split; 2.3.3.2 MOQA Product; 2.3.3.3 MOQA Top/Bot; 2.3.3.4 MOQA Percolation)
  2.4 Related work
  2.5 Summary

3 MOQA Language Design
  3.1 Introduction
  3.2 MOQA Language Overview
    3.2.1 Motivations and Design Goals
    3.2.2 Domain Specific Language
    3.2.3 Why Python
    3.2.4 Running MOQA (3.2.4.1 Execution Mode; 3.2.4.2 Analysis Mode)
  3.3 MOQA Language Syntax
    3.3.1 Lexical Conventions (3.3.1.1 Comments; 3.3.1.2 Identifiers; 3.3.1.3 Keywords; 3.3.1.4 Primitive Types)
    3.3.2 Labelled Partial Order
    3.3.3 MOQA Language Grammar (3.3.3.1 Structure of a MOQA Program; 3.3.3.2 Variable Declarations; 3.3.3.3 Function Definition; 3.3.3.4 Expression)
    3.3.4 Scopes
    3.3.5 MOQA Language Syntax Specification
  3.4 MOQA Language Type System
    3.4.1 Type Environments
    3.4.2 Type Checking Rules
  3.5 MOQA Language Semantics
    3.5.1 MOQA Execution Semantics
    3.5.2 MOQA Analysis Semantics
  3.6 MOQA Programming Restrictions
  3.7 Summary

4 MOQA Language Implementation
  4.1 Python Lex-Yacc
  4.2 MOQA Interpreter Architecture
  4.3 Lexer
  4.4 Parser
  4.5 Environment Implementation
  4.6 Type Checker
  4.7 Execution Engine
  4.8 Analysis Engine
    4.8.1 Dynamic programming in ACET analysis
  4.9 Summary

5 MOQA Programming Examples and Evaluation
  5.1 Insertionsort
  5.2 Quicksort
  5.3 Quickselect
  5.4 Mergesort
  5.5 TreapGen
  5.6 Treapsort
  5.7 Heapify
  5.8 Summary

6 Parallel Extension of MOQA
  6.1 Introduction
  6.2 Related Work
  6.3 Overview of Multithreaded Program Analysis
  6.4 MOQA Theory Extension
  6.5 Experimentation and Evaluating a Practical Example
    6.5.1 Experiment Result
  6.6 Summary

7 Applications of MOQA
  7.1 Random Bag Preservation for Heap Algorithms
    7.1.1 Binary Heap
    7.1.2 Leonardo Heap
    7.1.3 Skew Heap
    7.1.4 Min-Max Heap
  7.2 Complexity and Entropy based on MOQA
    7.2.1 Average Case Analysis of Treap Insertion
    7.2.2 Entropy Analysis based on Random Bags
    7.2.3 Tracking Entropy Changes in Sorting Algorithms
  7.3 Reversible Computing for MOQA
    7.3.1 Background
    7.3.2 Reversible Split Operation
    7.3.3 Reversible Quicksort
  7.4 Smoothed Analysis for MOQA
  7.5 Summary

8 Conclusions and Future Work
  8.1 Conclusions
  8.2 Future Work
    8.2.1 From an Application Perspective
    8.2.2 From a Theoretical Perspective

A MOQA Language
  A1 MOQA Language Syntax Specification
  A2 MOQA Language Lexer
  A3 MOQA Language Parser

B
  B1 Approximation Timing for the Merge Operation

References

List of Figures

2.1 An example of a Hasse diagram with node set X = {a, b, c, d, e}
2.2 An example of SP-order
2.3 An example of LPO with labelling function F = {a ↦ 5, b ↦ 3, c ↦ 4, d ↦ 2, e ↦ 1}
2.4 An example of order-isomorphic data-labellings with the label set L = {1, 2, 3}
2.5 An example of random structure with the label set L = {1, 2, 3, 4, 5}
2.6 A size four heap random structure with label set L = {1, 2, 3, 4}
2.7 Resulting size three heap random structure
2.8 Isolated subsets example
2.9 MOQA Split over a LPO
2.10 MOQA Split over a discrete random structure R(Δ3) with label set L = {1, 2, 3}
2.11 Split on list of size n
2.12 Product of two partial orders
2.13 Illustration of steps involved in executing the MOQA Product operation
2.14 An example of the MOQA Top operation
2.15 An example of the MOQA PercM operation
3.1 sample.moqa output 1
3.2 sample.moqa output 2
3.3 sample.moqa ACET output
3.4 MOQA Language Type System
4.1 MOQA Interpreter Architecture
4.2 MOQA abstract syntax tree class hierarchy
4.3 Abstract syntax tree for MOQA Quicksort
4.4 Interpreter analysis engine flowchart
4.5 MOQA analysis library class diagram
4.6 Series SPPO creation example
5.1 MOQA Treap creation example
5.2 TreapGen algorithm maps R(Δ3) to random bag TREAP(3)
5.3 Treapsort example on a four element treap
5.4 Heapify example to create a four node heap
6.1 A dag representation of a multithreaded program execution; each vertex is a strand, edges represent strand dependencies
6.2 A comparison of average execution time between different core amounts and how ACEST and ACEWT bound the times; green represents the sequential execution time
6.3 A comparison of the obtained speedup differences for core amounts compared to sequential execution and the theoretically obtained values
7.1 Leonardo tree examples: Lt0, Lt1, Lt2, Lt3
7.2 Eight node Leonardo Heap example
7.3 Leonardo Heap creation example with input list L = [3, 1, 4, 2]
7.4 Leonardo Heap creation: random preserving contour example
7.5 Skew Heap creation example: length three random list
7.6 Min-Max heap built from input list [3, 5, 1, 2, 6, 4]
7.7 MOQA Min-Max heapify on input list [3, 5, 1, 2, 6, 4]
7.8 One possible partial order generated by Min-Max heapify
7.9 Result of random insertion on MOQA TREAP(3)
7.10 Bottom-up Mergesort example: first five merge operations
7.11 Mergesort recursion tree on list size eight
7.12 Forward and reverse split on random list [x, f, m, g, q, s, b, t, p, z]
7.13 MOQA Smoothed Analysis of Quicksort with input list size 50 on different degrees of perturbation

List of Tables

2.1 Worst-case IO-compositionality counter example
3.1 MOQA language LPO operations
5.1 Insertionsort: comparing the theoretical result with the interpreter result
5.2 Quicksort: comparing the theoretical result with the interpreter result
5.3 Quickselect: comparing the theoretical result with the interpreter result
5.4 Mergesort: comparing the theoretical result with the interpreter result
5.5 Treapsort: comparing the theoretical result with the interpreter result
Recommended publications
  • Edsger W. Dijkstra: a Commemoration
Edsger W. Dijkstra: a Commemoration. Krzysztof R. Apt (CWI, Amsterdam, The Netherlands, and MIMUW, University of Warsaw, Poland) and Tony Hoare (Department of Computer Science and Technology, University of Cambridge, and Microsoft Research Ltd, Cambridge, UK), editors. arXiv:2104.03392 [cs.GL], 7 Apr 2021. Abstract: This article is a multiauthored portrait of Edsger Wybe Dijkstra that consists of testimonials written by several friends, colleagues, and students of his. It provides unique insights into his personality, working style and habits, and his influence on other computer scientists, as a researcher, teacher, and mentor. Contents: Preface; Tony Hoare; Donald Knuth; Christian Lengauer; K. Mani Chandy; Eric C.R. Hehner; Mark Scheevel; Krzysztof R. Apt; Niklaus Wirth; Lex Bijlsma; Manfred Broy; David Gries; Ted Herman; Alain J. Martin; J Strother Moore; Vladimir Lifschitz; Wim H. Hesselink; Hamilton Richards; Ken Calvert; David Naumann; David Turner; J.R. Rao; Jayadev Misra; Rajeev Joshi; Maarten van Emden; Two Tuesday Afternoon Clubs. Preface: Edsger Dijkstra was perhaps the best known, and certainly the most discussed, computer scientist of the seventies and eighties. We both knew Dijkstra, though each of us in different ways, and we both were aware that his influence on computer science was not limited to his pioneering software projects and research articles. He interacted with his colleagues by way of numerous discussions, extensive letter correspondence, and hundreds of so-called EWD reports that he used to send to a select group of researchers.
  • Sorting Algorithm
    In computer science, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions: 1. The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order); 2. The output is a permutation, or reordering, of the input. Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956.[1] Although many consider it a solved problem, useful new sorting algorithms are still being invented (for example, library sort was first published in 2004). Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide and conquer algorithms, data structures, randomized algorithms, best, worst and average case analysis, time-space tradeoffs, and lower bounds. Classification: sorting algorithms used in computer science are often classified by computational complexity (worst, average and best behaviour) of element comparisons in terms of the size of the list (n). For typical serial sorting algorithms good behavior is O(n log n), with parallel sort in O(log² n), and bad behavior is O(n²).
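    The two output conditions stated in the excerpt translate directly into a checker. A minimal Python sketch (the function names are illustrative, not from any particular source):

    ```python
    from collections import Counter

    def is_valid_sort(output, original):
        """Check the excerpt's two conditions: the output is in
        nondecreasing order, and it is a permutation of the input."""
        nondecreasing = all(a <= b for a, b in zip(output, output[1:]))
        permutation = Counter(output) == Counter(original)
        return nondecreasing and permutation

    # Example subject: bubble sort (analyzed as early as 1956, per the excerpt).
    def bubble_sort(items):
        items = list(items)  # work on a copy
        for i in range(len(items)):
            for j in range(len(items) - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    data = [3, 1, 4, 1, 5, 9, 2, 6]
    assert is_valid_sort(bubble_sort(data), data)
    ```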
  • Download Distributed Systems Free Ebook
    Distributed Systems, by Maarten van Steen and Andrew S. Tanenbaum. 596 pages, CreateSpace Independent Publishing Platform, 2017. Distributed Systems: the complete guide. The hope is that together, the system can maximize resources and information while preventing failures, as if one system fails, it won't affect the availability of the service. For the first time computers would be able to send messages to other systems with a local IP address. The messages passed between machines contain forms of data that the systems want to share, like databases, objects, and files. Also known as distributed computing and distributed databases, a distributed system is a collection of independent components located on different machines that share messages with each other in order to achieve common goals. To prevent infinite loops, running the code requires some amount of Ether. As mentioned in many places, one of which is this great article, you cannot have consistency and availability without partition tolerance. Because it works in batches (jobs), a problem arises: if your job fails, you need to restart the whole thing. While in a voting system an attacker need only add nodes to the network (which is easy, as free access to the network is a design target), in a CPU-power-based scheme an attacker faces a physical limitation: getting access to more and more powerful hardware.
  • Comparison Sorts
    Comparison sorts (best / average / worst comparisons in list size n; memory; stability; method; other notes):
    - Quicksort — n log n / n log n / n²; memory log n on average, n worst case (the Sedgewick variation is log n worst case); the typical in-place sort is not stable, though stable versions exist; partitioning. Quicksort is usually done in place with O(log n) stack space. Most implementations are unstable, as stable in-place partitioning is more complex; naïve variants use an O(n) space array to store the partition. A quicksort variant using three-way (fat) partitioning takes O(n) comparisons when sorting an array of equal keys.
    - Merge sort — n log n / n log n / n log n; memory n worst case; stable; merging. Highly parallelizable (up to O(log n) using the Three Hungarians' Algorithm or, more practically, Cole's parallel merge sort) for processing large amounts of data.
    - In-place merge sort — worst case n log² n; memory O(1); stable; merging. Can be implemented as a stable sort based on stable in-place merging.
    - Heapsort — n log n / n log n / n log n; memory O(1); not stable; selection.
    - Insertion sort — n / n² / n²; memory O(1); stable; insertion. O(n + d), where d is the number of inversions.
    - Introsort — n log n / n log n / n log n; memory O(log n); not stable; partitioning and selection. Used in several STL implementations.
    - Selection sort — n² / n² / n²; memory O(1); not stable; selection. Stable with O(n) extra space, for example using lists.
    - Timsort — n / n log n / n log n; memory n; stable; insertion and merging. Makes n comparisons when the data is already sorted or reverse sorted.
    - Cubesort — n / n log n / n log n; memory n; stable; insertion. Makes n comparisons when the data is already sorted or reverse sorted.
    - Shell sort — best n log n; average and worst depend on the gap sequence (best known worst case is n log² n); memory O(1); not stable; insertion. Small code size, no use of call stack, reasonably fast; useful where memory is at a premium, such as embedded and older mainframe applications.
    - Bubble sort — n / n² / n²; memory O(1); stable; exchanging. Tiny code size.
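    The table's "Stable" column is easy to demonstrate concretely: a stable sort keeps equal keys in their original relative order. A small Python sketch using the built-in sort, which is Timsort (listed above as stable, insertion and merging):

    ```python
    # Equal keys keep their original relative order under a stable sort.
    records = [("apple", 2), ("pear", 1), ("plum", 2), ("fig", 1)]

    by_count = sorted(records, key=lambda r: r[1])
    # ("pear", 1) stays before ("fig", 1); ("apple", 2) before ("plum", 2).
    assert by_count == [("pear", 1), ("fig", 1), ("apple", 2), ("plum", 2)]

    # Timsort's adaptivity, also noted in the table: already-sorted input
    # needs only about n comparisons.
    already_sorted = list(range(10))
    assert sorted(already_sorted) == already_sorted
    ```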
  • Smoothsort's Behavior on Presorted Sequences by Stefan Hertel
    Smoothsort's Behavior on Presorted Sequences, by Stefan Hertel. Fachbereich 10, Universität des Saarlandes, 6600 Saarbrücken, West Germany. A 82/11, July 1982. Abstract: In [5], Mehlhorn presented an algorithm for sorting nearly sorted sequences of length n in time O(n(1 + log(F/n))), where F is the number of initial inversions. More recently, Dijkstra [3] presented a new algorithm for sorting in situ. Without giving much evidence of it, he claims that his algorithm works well on nearly sorted sequences. In this note we show that smoothsort compares unfavorably to Mehlhorn's algorithm. We present a sequence of length n with O(n log n) inversions which forces smoothsort to use time Ω(n log n), contrasting with the time O(n log log n) Mehlhorn's algorithm would need. 0. Introduction. Sorting is a task in the very heart of computer science, and efficient algorithms for it were developed early. Several of them achieve the O(n log n) lower bound for sorting n elements by comparison that can be found in Knuth [4]. In many applications, however, the lists to be sorted do not consist of randomly distributed elements; they are already partially sorted. Most classic O(n log n) algorithms, most notably mergesort and heapsort (see [4]), do not take the presortedness of their inputs into account (cmp. [2]). Therefore, in recent years, the interest in sorting focused on algorithms that exploit the degree of sortedness of the respective input. No generally accepted measure of sortedness of a list has evolved so far. Cook and Kim [2] use the minimum number of elements after the removal of which the remaining portion of the list is sorted; on this basis, they compared five well-known sorting algorithms experimentally.
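    F, the number of initial inversions, is the presortedness measure in Mehlhorn's bound above. A standard way to compute it in O(n log n) is merge-sort-based counting; the following Python sketch is illustrative and not taken from the paper:

    ```python
    def count_inversions(seq):
        """Count pairs (i, j) with i < j and seq[i] > seq[j] -- the measure
        F in Mehlhorn's bound O(n(1 + log(F/n))). Merge-sort-based, O(n log n)."""
        def sort_count(a):
            if len(a) <= 1:
                return a, 0
            mid = len(a) // 2
            left, inv_l = sort_count(a[:mid])
            right, inv_r = sort_count(a[mid:])
            merged, i, j, inv = [], 0, 0, inv_l + inv_r
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    merged.append(left[i]); i += 1
                else:
                    merged.append(right[j]); j += 1
                    inv += len(left) - i  # right[j] jumps over the rest of left
            merged.extend(left[i:]); merged.extend(right[j:])
            return merged, inv
        return sort_count(list(seq))[1]

    assert count_inversions([1, 2, 3, 4]) == 0  # fully sorted: F = 0
    assert count_inversions([4, 3, 2, 1]) == 6  # reversed: F = n(n-1)/2
    ```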
  • Architecture Oriented Otherwise
    Architecture Oriented Otherwise, by David Leatherbarrow. 304 pages, Princeton Architectural Press, 2015. Michael Benedikt has long helped us understand how buildings frame co-visibility and, by implication, the ways in which we interact with others. An important argument of this book is that conception and construction only explain the work's pre-history, not its manner of existing in the world, its ways of variously resisting and allowing the impress or effect of the forces that animate the wider location. As engaging as it is erudite, the text adduces evidence from domains we tend to keep apart (religion, perceptual psychology, and philosophy, for example) in order to deepen our understanding of architectures we thought we knew, works by figures like Aldo van Eyck and Louis I. Kahn. Much of enterprise architecture is about understanding what is worth the costs of central coordination, and what form that coordination should take. As with conceptual integrity, it was Fred Brooks who introduced it to a wider audience when he cited the paper and the idea in his elegant classic The Mythical Man-Month, calling it "Conway's Law."
  • A Structured Programming Approach to Data
    A Structured Programming Approach to Data, by Coleman. 222 pages, Springer-Verlag New York Inc., 2012. Programming is becoming a technology; a theory known as structured programming is developing. De Marco's approach [13] consists of the following objects (see figure) [12]. How any system is developed can be determined through a data flow diagram. The result of structured analysis is a set of related graphical diagrams, process descriptions, and data definitions. Therefore, when changes are made to that type of data, the corresponding change must be made at each location that acts on that type of data within the program. It means that the program uses single-entry and single-exit elements.
  • Parallel Sorting with Minimal Data
    Parallel Sorting with Minimal Data. Christian Siebert and Felix Wolf. German Research School for Simulation Sciences, Aachen, Germany; RWTH Aachen University, Computer Science Department, Aachen, Germany; Forschungszentrum Jülich, Jülich Supercomputing Centre, Jülich, Germany. Abstract: For reasons of efficiency, parallel methods are normally used to work with as many elements as possible. Contrary to this preferred situation, some applications need the opposite. This paper presents three parallel sorting algorithms suited for the extreme case where every process contributes only a single element. Scalable solutions for this case are needed for the communicator constructor MPI_Comm_split. Compared to previous approaches requiring O(p) memory, we introduce two new parallel sorting algorithms working with a minimum of O(1) memory. One method is simple to implement and achieves a running time of O(p). Our scalable algorithm solves this sorting problem in O(log² p) time. Keywords: MPI, Scalability, Sorting, Algorithms, Limited memory. 1. Introduction. Sorting is often considered to be the most fundamental problem in computer science. Since the 1960s, computer manufacturers estimate that more than 25 percent of the processor time is spent on sorting [6, p. 3]. Many applications use sorting algorithms as a key subroutine either because they inherently need to sort some information, or because sorting is a prerequisite for efficiently solving other problems such as searching or matching. Formally, the sequential sorting problem can be defined as follows: Input: A sequence of n items (x₁, x₂, …, xₙ), and a relational operator ≤ that specifies an order on these items.
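    The paper's own algorithms are not reproduced in the excerpt. As a rough illustration of the problem setting only (one key per process, O(1) extra memory, O(p) time), here is a hedged mpi4py sketch of one plausible ring scheme, an assumption of this note rather than the paper's method: each process circulates its key around a ring and counts how many keys order before its own, which is precisely the rank needed for MPI_Comm_split-style splitting.

    ```python
    # One key per process; learn its global sorted rank in O(p) time and
    # O(1) memory by circulating keys around a ring. NOT the paper's
    # algorithm -- an illustrative sketch assuming mpi4py is available.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    p, me = comm.Get_size(), comm.Get_rank()

    my_key = (7 * me + 3) % 10      # stand-in for the single local element
    mine = (my_key, me)             # tie-break by owner rank: a total order
    travelling = mine
    rank = 0
    for _ in range(p - 1):
        # pass the travelling pair one step around the ring
        travelling = comm.sendrecv(travelling,
                                   dest=(me + 1) % p,
                                   source=(me - 1) % p)
        if travelling < mine:       # lexicographic: key first, then owner
            rank += 1

    print(f"process {me}: key {my_key} has sorted rank {rank}")
    # Run with, e.g.: mpiexec -n 4 python rank_ring.py
    ```

    After p - 1 shifts every process has seen each of the other p - 1 keys exactly once, so the count of smaller pairs is the process's rank; only one travelling pair is held at a time, hence O(1) memory.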
  • Exploiting Partial Order with Quicksort
    University of Wollongong Research Online, Department of Computing Science Working Paper Series, Faculty of Engineering and Information Sciences, 1982. Recommended citation: Dromey, R. Geoff, Exploiting partial order with quicksort, Department of Computing Science, University of Wollongong, Working Paper 82-14, 1982, 13p. https://ro.uow.edu.au/compsciwp/46. Exploiting Partial Order with Quicksort. R. Geoff Dromey, Department of Computing Science, The University of Wollongong, Post Office Box 1144, Wollongong, N.S.W. 2500, Australia. Abstract: The widely known Quicksort algorithm does not attempt to actively take advantage of partial order in sorting data. A relatively simple change can be made to the Quicksort strategy to give a best-case performance of n for ordered data, with a smooth transition to O(n log n) for the random data case. An attractive attribute of this new algorithm (Transort) is that its performance for random data matches that for Sedgewick's claimed best implementation of Quicksort. 1. Introduction. The Quicksort algorithm [1] is an elegant, efficient and widely used internal sorting method.
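    Dromey's Transort itself is not shown in the excerpt. The Python sketch below only illustrates the general idea the abstract describes — detecting existing order before recursing, so fully ordered input costs about n comparisons while random input degrades smoothly to ordinary quicksort behaviour. The pre-scan strategy, function name, and structure are hypothetical, not the paper's.

    ```python
    def adaptive_quicksort(a, lo=0, hi=None):
        """Quicksort with an ordered-range pre-check (illustrative only,
        not Dromey's actual Transort). A sorted range is detected in one
        linear scan and left alone, giving ~n comparisons on ordered
        input; otherwise a Hoare-style partition proceeds as usual."""
        if hi is None:
            hi = len(a) - 1
        if hi - lo < 1:
            return
        if all(a[i] <= a[i + 1] for i in range(lo, hi)):  # already ordered?
            return
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi
        while i <= j:                                     # Hoare partition
            while a[i] < pivot: i += 1
            while a[j] > pivot: j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1; j -= 1
        adaptive_quicksort(a, lo, j)
        adaptive_quicksort(a, i, hi)

    import random
    xs = random.sample(range(1000), 100)
    adaptive_quicksort(xs)
    assert xs == sorted(xs)
    ```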
  • Efficiency Analysis
    Isoefficiency analysis (v2: typos fixed in matrix-vector multiply). Measuring the parallel scalability of algorithms: one of many parallel performance metrics. It allows us to determine scalability with respect to machine parameters — the number of processors and their speed, and communication patterns, bandwidth and startup — and gives us a way of computing the relative scalability of two algorithms, and how much work must be increased when the number of processors is increased to maintain the same efficiency. Amdahl's law reviewed: as the number of processors increases, serial overheads reduce efficiency; as problem size increases, efficiency returns. Efficiency of adding n numbers on an ancient machine: P=4 gives ε of .80 with 64 numbers; P=8 gives ε of .80 with 192 numbers; P=16 gives ε of .80 with 512 numbers (4x the processors, 8x the data). Motivating example: consider a program that does O(n) work, and assume the overhead is O(log₂ p), i.e. it does a reduction. The total overhead, i.e. the amount of time processors are sitting idle or doing work associated with parallelism instead of the basic problem, is O(p log₂ p). (Figure: naive allreduce across processors P0–P7; roughly half the nodes are idle at any given time.) Data needed to maintain efficiency: as the number of processors increases, serial overheads reduce efficiency; as problem size increases, efficiency returns.
    P = 2: P log₂ P = 2, data per processor 1 GB
    P = 4: P log₂ P = 8, data per processor 2 GB
    P = 8: P log₂ P = 24, data per processor 3 GB
    P = 16: P log₂ P = 64, data per processor 4 GB
    Isoefficiency analysis allows us to analyze the rate at which the data size must grow to mask parallel overheads, to determine whether a computation is scalable.
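    The table's arithmetic can be reproduced directly: with work W = n and total overhead T_o = p log₂ p, holding efficiency E = W / (W + T_o) constant forces n to grow proportionally to p log₂ p, so data per processor grows like log₂ p. A small Python sketch (the constant K = 1 is chosen to match the table's units):

    ```python
    import math

    # Isoefficiency for the slide's example: W(n) = n, T_o(p) = p*log2(p).
    # Constant efficiency requires n = K * p * log2(p), i.e. per-processor
    # data grows like log2(p) -- the 1/2/3/4 GB column in the table.
    K = 1  # GB of data per unit of overhead, matching the slide's table

    for p in (2, 4, 8, 16):
        overhead = p * math.log2(p)          # T_o
        total = K * overhead                 # n needed to hold E fixed
        print(f"p={p:2d}  p*log2(p)={overhead:4.0f}  "
              f"total data={total:4.0f} GB  per processor={total / p:2.0f} GB")
    # p=2 -> 1 GB/proc, p=4 -> 2, p=8 -> 3, p=16 -> 4: 4x processors, 8x data.
    ```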
  • A Time-Based Adaptive Hybrid Sorting Algorithm on CPU and GPU with Application to Collision Detection
    Fachbereich 3: Mathematik und Informatik. Diploma Thesis: A Time-Based Adaptive Hybrid Sorting Algorithm on CPU and GPU with Application to Collision Detection. Robin Tenhagen, Matriculation No. 2288410, 5 January 2015. Examiner: Prof. Dr. Gabriel Zachmann. Supervisor: Prof. Dr. Frieder Nake. Advisor: David Mainzer. Universität Bremen, January 2015. Selbstständigkeitserklärung (declaration of authorship, translated from German): I hereby declare that I produced this work independently, did not submit it elsewhere for examination purposes, and used no aids other than those stated. All text excerpts, quotations, or content by other authors that I knowingly used have been explicitly marked as such. Bremen, 5 January 2015, Robin Tenhagen. Abstract: Real-world data is often sorted in a repetitive way; in many applications, only small parts change within a small time frame. This thesis presents the novel approach of time-based adaptive sorting. The proposed algorithm is hybrid: it uses advantages of existing adaptive sorting algorithms. This means that the algorithm can not only be used on the CPU but, introducing new fast adaptive GPU algorithms, it delivers a usable adaptive sorting algorithm for GPGPU. As one of the practical examples, for collision detection in well-known animation scenes with deformable cloths, especially on CPU the presented algorithm turned out to be faster than the adaptive hybrid algorithm Timsort.
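    The thesis's scenario — repeatedly re-sorting data of which only a small part has changed — is easy to reproduce on the CPU side with Python's built-in Timsort, the adaptive baseline the abstract mentions. A hedged micro-benchmark sketch (the sizes and the 0.1% perturbation rate are arbitrary choices for illustration):

    ```python
    import random
    import timeit

    # Re-sorting nearly-sorted data vs. sorting shuffled data from scratch.
    # Python's built-in sort is Timsort; its adaptivity makes the
    # nearly-sorted case far cheaper.
    n = 200_000
    base = sorted(random.random() for _ in range(n))

    nearly_sorted = base[:]
    for i in random.sample(range(n), n // 1000):  # perturb 0.1% of entries
        nearly_sorted[i] = random.random()

    shuffled = base[:]
    random.shuffle(shuffled)

    t_nearly = timeit.timeit(lambda: sorted(nearly_sorted), number=20)
    t_random = timeit.timeit(lambda: sorted(shuffled), number=20)
    print(f"nearly sorted: {t_nearly:.3f}s   fully shuffled: {t_random:.3f}s")
    # Expect the nearly-sorted case to be several times faster.
    ```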