Automatic Algorithm Recognition Based on Programming Schemas and Beacons Aalto University

Total Pages: 16

File Type: pdf, Size: 1020 KB

Automatic Algorithm Recognition Based on Programming Schemas and Beacons: A Supervised Machine Learning Classification Approach
Ahmad Taherkhani
Aalto University School of Science, Department of Computer Science and Engineering, Learning + Technology Group
Aalto University publication series DOCTORAL DISSERTATIONS 17/2013

A doctoral dissertation completed for the degree of Doctor of Science (Technology) (Doctor of Philosophy), to be defended, with the permission of the Aalto University School of Science, at a public examination held in lecture hall T2 of the school on the 8th of March 2013 at 12 noon.

Supervising professor: Professor Lauri Malmi
Thesis advisor: D.Sc. (Tech) Ari Korhonen
Preliminary examiners: Professor Jorma Sajaniemi, University of Eastern Finland, Finland; Dr. Colin Johnson, University of Kent, United Kingdom
Opponent: Professor Tapio Salakoski, University of Turku, Finland

© Ahmad Taherkhani
ISBN 978-952-60-4989-2 (printed), ISBN 978-952-60-4990-8 (pdf)
ISSN-L 1799-4934, ISSN 1799-4934 (printed), ISSN 1799-4942 (pdf)
http://urn.fi/URN:ISBN:978-952-60-4990-8
Unigrafia Oy, Helsinki 2013, Finland
Publication orders (printed book): the dissertation can be read at http://lib.tkk.fi/Diss/

Abstract
Aalto University, P.O. Box 11000, FI-00076 Aalto, www.aalto.fi
Author: Ahmad Taherkhani
Name of the doctoral dissertation: Automatic Algorithm Recognition Based on Programming Schemas and Beacons: A Supervised Machine Learning Classification Approach
Publisher: Aalto University School of Science
Unit: Department of Computer Science and Engineering
Series: Aalto University publication series DOCTORAL DISSERTATIONS 17/2013
Field of research: Software Systems
Manuscript submitted: 11 September 2012
Date of the defence: 8 March 2013
Permission to publish granted: 18 December 2012
Language: English
Monograph / Article dissertation (summary + original articles)

In this thesis, we present techniques to recognize basic algorithms covered in computer science education from source code. The techniques use various software metrics, language constructs and other characteristics of source code, as well as the concepts of schemas and beacons from program comprehension models. Schemas are high-level programming knowledge with detailed knowledge abstracted out. Beacons are statements that imply specific structures in a program. Moreover, roles of variables constitute an important part of the techniques. Roles are concepts that describe the behavior and usage of variables in a program; they were originally introduced to help novices learn programming. We discuss two methods for algorithm recognition.
The first one is a classification method based on a supervised machine learning technique. It uses the vectors of characteristics and beacons, automatically computed from the algorithm implementations of a training set, to learn which characteristics and beacons best describe each algorithm. Based on these observed instance-class pairs, the system assigns a class to each new input algorithm implementation according to its characteristics and beacons. We use the C4.5 algorithm to generate a decision tree that performs the task. In the second method, the schema detection method, algorithms are defined as schemas that exist in the knowledge base of the system. To identify an algorithm, the method searches the source code for schemas that correspond to those predefined schemas. Moreover, we present a method that combines the two: it first applies the schema detection method to extract algorithmic schemas from the given program and then applies the classification method to the schema parts only. This enhances the reliability of the classification method, as the characteristics and beacons are computed only from the algorithm implementation code rather than from the whole given program.

We discuss several empirical studies conducted to evaluate the performance of the methods. Some results are as follows: evaluated by leave-one-out cross-validation, the estimated classification accuracy is 98.1% for sorting algorithms, 97.3% for searching, heap, basic tree traversal and graph algorithms, and 97.0% for the combined method (on sorting algorithms and their variations from real student submissions). For the schema detection method, the accuracy is 88.3% and 94.1%, respectively. In addition, we present a study for categorizing student-implemented sorting algorithms and their variations in order to find problematic solutions that would allow us to give feedback on them. We also explain how these variations can be automatically recognized.

Keywords: algorithm recognition, schema detection, beacons, roles of variables, program comprehension, automated assessment
Location of publisher: Espoo. Location of printing: Helsinki. Year: 2013. Pages: 254.

To the memory of my brother and my father-in-law, who both were a big part of my life.

Preface

Back in 2007, after working in industry for several years, I contacted Professor Lauri Malmi, the supervisor of this dissertation, about the possibility of working in his research group to complete my master's thesis. He hired me for a perfect position, funded by the Academy of Finland. The project was much larger than a master's thesis, and after completing my thesis he offered me an opportunity to continue working on it to pursue my doctoral degree. Having always desired to do my PhD and being interested in the topic, I asked for a leave of absence from my work at Accenture and took the opportunity. It was Lauri who made it all possible: he gave me an opportunity to become a part of the computing education community, allowed me to focus almost full-time on my research and provided me with excellent guidance throughout the project. So thank you, Lauri; I really appreciate all of this.

Doctor Ari Korhonen provided great ideas and insightful suggestions which are reflected throughout this thesis.
His friendly discussions, can-do attitude and critical thinking always helped me forward. I thank him also for showing me how to be a good instructor. I thank all the members of our research group, the Learning + Technology Group (LeTech), for providing a friendly environment in which to work and for their comments during our weekly meetings. I especially thank Doctor Jan Lönnberg for his help.

I am grateful to the pre-examiners, Professor Jorma Sajaniemi and Doctor Colin Johnson, for taking the time to examine my thesis and for their valuable observations and suggestions. Sajaniemi also read my Licentiate's thesis and one of my journal publications and provided constructive feedback. It is also an honor to have Professor Tapio Salakoski as my opponent.

I would also like to thank the participants of the international conferences who provided comments and feedback on my presentations. The same goes for the international working group that I had the chance to participate in.

I thank my parents for giving me an opportunity for an education and an abiding respect for learning. Last, and most importantly, I would like to thank my kids, Ava and Arad, for being so special, and my wife Elmira for her enormous patience, support and continuing encouragement in whatever I decide to do.

Helsinki, December 28, 2012
Ahmad Taherkhani

Contents

Preface
Contents
List of Publications
Author's Contribution
1. Introduction
   1.1 Motivation
   1.2 Research Questions
   1.3 Structure of the Thesis
2. Algorithm Recognition and Related Work
   2.1 Algorithm Recognition (AR)
   2.2 Related Work
       2.2.1 Program Comprehension
       2.2.2 Clone Detection
       2.2.3 Program Similarity Evaluation Techniques
       2.2.4 Reverse Engineering Techniques
       2.2.5 Roles of Variables
3. Program Comprehension and Roles of Variables, a Theoretical Background
   3.1 Schemas and Beacons
   3.2 Roles of Variables
       3.2.1 An Example
   3.3 The Link Between RoV and PC
4. Decision Tree Classifiers and the C4.5 Algorithm
   4.1 Decision Tree Classifiers in General
   4.2 The C4.5 Decision Tree Classifier
5. Overall Process and Common Characteristics
   5.1 Overall Process
   5.2 Common Characteristics
       5.2.1 Computing Characteristics
   5.3 The Tool
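
The classification method described in the abstract can be pictured with a minimal, purely illustrative C sketch. This is not the thesis's tool: the characteristic and beacon names below are invented for the example, and the hand-written decision function merely stands in for the decision tree that C4.5 would learn from a training set.

    #include <stdio.h>

    /* A vector of characteristics and beacons for one algorithm
     * implementation. The fields are hypothetical examples of the kind of
     * features the abstract describes, not the thesis's actual feature set. */
    typedef struct {
        int num_loops;               /* characteristic: number of loops */
        int has_recursion;           /* characteristic: any recursive call */
        int swap_in_inner_loop;      /* beacon: element swap inside a nested loop */
        int partitions_around_pivot; /* beacon: pivot-based partitioning step */
    } FeatureVector;

    /* Hand-written stand-in for a learned decision tree. */
    static const char *classify(const FeatureVector *f) {
        if (f->has_recursion)
            return f->partitions_around_pivot ? "quicksort-like" : "mergesort-like";
        if (f->num_loops >= 2 && f->swap_in_inner_loop)
            return "elementary sort (bubble/insertion-like)";
        return "unrecognized";
    }

    int main(void) {
        FeatureVector sample = { 2, 0, 1, 0 };   /* e.g. extracted from a student program */
        printf("classified as: %s\n", classify(&sample));
        return 0;
    }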
Recommended publications
  • Sorting Algorithm
    A sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) which require input data to be in sorted lists; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions: 1. the output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order); 2. the output is a permutation (reordering) of the input.
    Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956.[1] Although many consider it a solved problem, useful new sorting algorithms are still being invented (for example, library sort was first published in 2006). Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide and conquer algorithms, data structures, randomized algorithms, best, worst and average case analysis, time-space tradeoffs, and upper and lower bounds.
    Classification: sorting algorithms are often classified by computational complexity (worst, average and best behavior) of element comparisons in terms of the size of the list (n). For typical serial sorting algorithms good behavior is O(n log n), with parallel sort in O(log² n), and bad behavior is O(n²).
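    A quick C sketch of those two formal conditions (not part of the excerpt; the helper names are illustrative and it covers only the common case of an int array):

        #include <stdbool.h>
        #include <stdlib.h>
        #include <string.h>

        /* Condition 1: the output is in nondecreasing order. */
        static bool is_nondecreasing(const int *a, size_t n) {
            for (size_t i = 1; i < n; i++)
                if (a[i] < a[i - 1]) return false;
            return true;
        }

        static int cmp_int(const void *x, const void *y) {
            int a = *(const int *)x, b = *(const int *)y;
            return (a > b) - (a < b);
        }

        /* Condition 2: the output is a permutation of the input
         * (same multiset of values): sort copies of both and compare. */
        static bool is_permutation_of(const int *out, const int *in, size_t n) {
            if (n == 0) return true;
            int *a = malloc(n * sizeof *a), *b = malloc(n * sizeof *b);
            if (!a || !b) { free(a); free(b); return false; }
            memcpy(a, out, n * sizeof *a);
            memcpy(b, in, n * sizeof *b);
            qsort(a, n, sizeof *a, cmp_int);
            qsort(b, n, sizeof *b, cmp_int);
            bool same = memcmp(a, b, n * sizeof *a) == 0;
            free(a); free(b);
            return same;
        }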
  • PART III Basic Data Types
    PART III: Basic Data Types. Chapter 7: Using Numeric Types.
    Two kinds of number representations, integer and floating point, are supported by C. The various integer types in C provide exact representations of the mathematical concept of “integer” but can represent values in only a limited range. The floating-point types in C are used to represent the mathematical type “real.” They can represent real numbers over a very large range of magnitudes, but each number generally is an approximation, using a limited number of decimal places of precision. In this chapter, we define and explain the integer and floating-point data types built into C and show how to write their literal forms and I/O formats. We discuss the range of values that can be stored in each type, how to perform reliable arithmetic computations with these values, what happens when a number is converted (or cast) from one type to another, and how to choose the proper data type for a problem.
    We would like to think of numbers as integer values, not as patterns of bits in memory. This is possible most of the time when working with C because the language lets us name the numbers and compute with them symbolically. Details such as the length (in bytes) of the number and the arrangement of bits in those bytes can be ignored most of the time. However, inside the computer, the numbers are just bit patterns. This becomes evident when conditions such as integer overflow occur and a “correct” formula produces a wrong and meaningless answer.
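    A small C illustration of that "limited range" point (not from the book; it only prints the bounds rather than overflowing a signed int, because signed overflow is undefined behavior in C, while unsigned arithmetic wraps around):

        #include <limits.h>
        #include <stdio.h>

        int main(void) {
            printf("int range: %d .. %d\n", INT_MIN, INT_MAX);
            unsigned int u = UINT_MAX;
            printf("UINT_MAX              = %u\n", u);
            printf("UINT_MAX + 1 wraps to   %u\n", u + 1u);   /* well-defined: 0 */
            return 0;
        }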
  • Applied C and C++ Programming
    Applied C and C++ Programming. Alice E. Fischer, David W. Eggert (University of New Haven), Michael J. Fischer (Yale University). August 2018.
    Copyright © 2018 by Alice E. Fischer, David W. Eggert, and Michael J. Fischer. All rights reserved. This manuscript may be used freely by teachers and students in classes at the University of New Haven and at Yale University. Otherwise, no part of it may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the authors.
    Part III: Basic Data Types. Chapter 7: Using Numeric Types.
    Two kinds of number representations, integer and floating point, are supported by C. The various integer types in C provide exact representations of the mathematical concept of "integer" but can represent values in only a limited range. The floating-point types in C are used to represent the mathematical type "real." They can represent real numbers over a very large range of magnitudes, but each number generally is an approximation, using a limited number of decimal places of precision. In this chapter, we define and explain the integer and floating-point data types built into C and show how to write their literal forms and I/O formats. We discuss the range of values that can be stored in each type, how to perform reliable arithmetic computations with these values, what happens when a number is converted (or cast) from one type to another, and how to choose the proper data type for a problem.
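    A tiny C demonstration (not from the book) of the point that floating-point values are approximations with limited precision:

        #include <stdio.h>

        int main(void) {
            double a = 0.1 + 0.2;
            printf("0.1 + 0.2 = %.17g\n", a);                  /* 0.30000000000000004 */
            printf("equal to 0.3? %s\n", a == 0.3 ? "yes" : "no");
            float big = 16777216.0f;                           /* 2^24 */
            printf("16777216.0f + 1.0f = %.1f\n", big + 1.0f); /* rounds back to 16777216.0 */
            return 0;
        }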
  • References and Index
    References
    [1] E. H. L. Aarts and J. Korst. Simulated Annealing and Boltzmann Machines. Wiley, 1989. [2] J. Abello, A. Buchsbaum, and J. Westbrook. A functional approach to external graph algorithms. Algorithmica, 32(3):437–458, 2002. [3] W. Ackermann. Zum hilbertschen Aufbau der reellen Zahlen. Mathematische Annalen, 99:118–133, 1928. [4] G. M. Adel’son-Vel’skii and E. M. Landis. An algorithm for the organization of information. Soviet Mathematics Doklady, 3:1259–1263, 1962. [5] A. Aggarwal and J. S. Vitter. The input/output complexity of sorting and related problems. Communications of the ACM, 31(9):1116–1127, 1988. [6] A. V. Aho, J. E. Hopcroft, and J. D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, 1974. [7] A. V. Aho, B. W. Kernighan, and P. J. Weinberger. The AWK Programming Language. Addison-Wesley, 1988. [8] R. K. Ahuja, R. L. Magnanti, and J. B. Orlin. Network Flows. Prentice Hall, 1993. [9] R. K. Ahuja, K. Mehlhorn, J. B. Orlin, and R. E. Tarjan. Faster algorithms for the shortest path problem. Journal of the ACM, 3(2):213–223, 1990. [10] N. Alon, M. Dietzfelbinger, P. B. Miltersen, E. Petrank, and E. Tardos. Linear hash functions. Journal of the ACM, 46(5):667–683, 1999. [11] A. Andersson, T. Hagerup, S. Nilsson, and R. Raman. Sorting in linear time? Journal of Computer and System Sciences, 57(1):74–93, 1998. [12] F. Annexstein, M. Baumslag, and A. Rosenberg. Group action graphs and parallel architectures. SIAM Journal on Computing, 19(3):544–569, 1990. [13] D.
  • Engineering Burstsort: Toward Fast In-Place String Sorting
    Engineering Burstsort: Toward Fast In-Place String Sorting. RANJAN SINHA and ANTHONY WIRTH, The University of Melbourne.
    Burstsort is a trie-based string sorting algorithm that distributes strings into small buckets whose contents are then sorted in cache. This approach has earlier been demonstrated to be efficient on modern cache-based processors [Sinha & Zobel, JEA 2004]. In this article, we introduce improvements that reduce by a significant margin the memory requirement of Burstsort: it is now less than 1% greater than an in-place algorithm. These techniques can be applied to existing variants of Burstsort, as well as to other string algorithms such as those for string management. We redesigned the buckets, introducing sub-buckets and an index structure for them, which resulted in an order-of-magnitude space reduction. We also show the practicality of moving some fields from the trie nodes to the insertion point (for the next string pointer) in the bucket; this technique reduces memory usage of the trie nodes by one-third. Importantly, the trade-off for the reduction in memory use is only a very slight increase in the running time of Burstsort on real-world string collections. In addition, during the bucket-sorting phase, the string suffixes are copied to a small buffer to improve their spatial locality, lowering the running time of Burstsort by up to 30%. These memory usage enhancements have enabled the copy-based approach [Sinha et al., JEA 2006] to also reduce the memory usage with negligible impact on speed.
    Categories and Subject Descriptors: E.1 [Data Structures]; E.5 [Files]: Sorting/Searching. General Terms: Algorithms, Design, Experimentation, Performance. Additional Key Words and Phrases: Sorting, algorithms, cache, experimental algorithms, string management, tries. ACM Reference Format: Sinha, R.
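    The sketch below (not the authors' code) shows only the core distribute-then-sort idea in C, under simplifying assumptions: a single distribution level keyed on the first byte and a fixed bucket capacity, whereas real Burstsort grows a trie of buckets and "bursts" a bucket into child buckets when it fills.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define NBUCKETS   256
        #define BUCKET_CAP 1024   /* assumed fixed capacity; real Burstsort bursts instead */

        static int cmp_str(const void *a, const void *b) {
            return strcmp(*(const char *const *)a, *(const char *const *)b);
        }

        /* Distribute string pointers into buckets by first byte, sort each
         * bucket in cache, then write the buckets back in order. */
        static void bucket_sort_strings(const char **s, size_t n) {
            static const char *bucket[NBUCKETS][BUCKET_CAP];
            static size_t fill[NBUCKETS];
            memset(fill, 0, sizeof fill);

            for (size_t i = 0; i < n; i++) {
                unsigned char c = (unsigned char)s[i][0];
                if (fill[c] < BUCKET_CAP) bucket[c][fill[c]++] = s[i];
            }
            size_t k = 0;
            for (int c = 0; c < NBUCKETS; c++) {
                qsort(bucket[c], fill[c], sizeof bucket[c][0], cmp_str);
                for (size_t j = 0; j < fill[c]; j++) s[k++] = bucket[c][j];
            }
        }

        int main(void) {
            const char *words[] = { "burst", "sort", "trie", "bucket", "cache" };
            bucket_sort_strings(words, 5);
            for (int i = 0; i < 5; i++) printf("%s\n", words[i]);
            return 0;
        }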
  • Sorting and Efficient Searching
    Chapter 6: Sorting and Efficient Searching. What is in this chapter? Sorting is a fundamental problem-solving "tool" in computer science which can greatly affect an algorithm's efficiency. Sorting is discussed in this chapter as it pertains to the area of computer science. A few sorting strategies are discussed (i.e., bubble sort, selection sort, insertion sort and counting sort) and briefly compared. Finally, sorting is applied to the problem of simulating a fire spreading across a forest. We also discuss binary searching ... a more efficient way of searching through sorted data.
    6.1 Sorting. In addition to searching, sorting is one of the most fundamental "tools" that a programmer can use to solve problems. Sorting is the process of arranging items in some sequence and/or in different sets. In computer science, we are often presented with a list of data that needs to be sorted. For example, we may wish to sort (or arrange) a list of people. Naturally, we may imagine a list of people's names sorted by their last names. This is very common and is called a lexicographical (or alphabetical) sorting. The "way" in which we compare any two items for sorting is defined by the sort order. There are many other "sort orders" to sort a list of people. Depending on the application, we would choose the most applicable sorting order: sort by ID numbers, by age, by height, by weight, by birth date, etc. A list of items is just one obvious example of where sorting is often used.
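    Since the chapter's point is that sorted data can be searched efficiently, here is a small illustrative C binary search (not taken from the chapter):

        #include <stdio.h>

        /* Iterative binary search over a sorted int array; returns the index
         * of key, or -1 if it is not present. */
        static int binary_search(const int *a, int n, int key) {
            int lo = 0, hi = n - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
                if (a[mid] == key) return mid;
                if (a[mid] < key) lo = mid + 1;
                else hi = mid - 1;
            }
            return -1;
        }

        int main(void) {
            int sorted[] = { 2, 3, 5, 8, 13, 21, 34 };
            printf("%d\n", binary_search(sorted, 7, 13));   /* prints 4 */
            printf("%d\n", binary_search(sorted, 7, 4));    /* prints -1 */
            return 0;
        }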
  • Performance Improvement for Multi-Key Quick Sort Using Kepler Gpus
    Performance Improvement for Multi-Key Quick Sort using Kepler GPUs
    Bharath Shamasundar, Amoolya Shetty, Ananya Rao, Supreetha Shetty, Neelima Bayyapu
    Department of Computer Science and Engineering, NMAM Institute of Technology, Udupi, India
    {bharathbs294, amushetty93}@gmail.com; {ananyaraoj, supreethashetty8}@gmail.com; [email protected]
    Abstract — This paper presents the performance improvement obtained by multi key quick sort on Kepler NVIDIA GPUs. Since multi key quick sort is a recursion based algorithm, many of the researchers have found it laborious to parallelize the algorithm on the multi and many core architectures. A survey of imperative string sorting algorithms and a robust perceptivity of the Kepler GPU architecture gave rise to an intriguing rumination of matching the template of multi key quick sort with the dynamic parallelism feature offered by the Kepler based GPUs. The CPU parallel implementation of parallel multi key quick sort gives 33 to 50 percentage of improvement and 62 to 75 percentage of improvement when compared to 8-bit and 16-bit parallel MSD radix sort respectively.
    ... programmers, making it easily programmable. One such programming API that has been introduced by NVidia for parallel computing using GPUs is the Compute Unified Device Architecture (CUDA) [2-3]. With every new version of the GPU architecture, there is an increased possibility of applications that can take advantage of this improvement. A sorting algorithm based on recursion that was previously considered incapable of being parallelized in terms of GPGPU is now ported onto the latest GPU architecture that supports the framework of the sorting algorithm is considered in this paper. Some research studies have provided a
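    For reference, the sequential algorithm being ported is multi-key (three-way radix) quicksort in the style of Bentley and Sedgewick; the C sketch below is that CPU baseline, not the paper's CUDA implementation.

        #include <stdio.h>

        static void swap_ptr(const char **a, const char **b) {
            const char *t = *a; *a = *b; *b = t;
        }

        /* Sort a[lo..hi], comparing characters from position d onward. */
        static void mkqsort(const char **a, int lo, int hi, int d) {
            if (lo >= hi) return;
            int lt = lo, gt = hi, i = lo + 1;
            int pivot = (unsigned char)a[lo][d];
            while (i <= gt) {
                int c = (unsigned char)a[i][d];
                if (c < pivot)      swap_ptr(&a[lt++], &a[i++]);
                else if (c > pivot) swap_ptr(&a[i], &a[gt--]);
                else                i++;
            }
            mkqsort(a, lo, lt - 1, d);                  /* strings with a smaller d-th char */
            if (pivot != 0) mkqsort(a, lt, gt, d + 1);  /* equal d-th char: move to next char */
            mkqsort(a, gt + 1, hi, d);                  /* strings with a larger d-th char */
        }

        int main(void) {
            const char *s[] = { "quick", "multi", "key", "sort", "kepler", "gpu" };
            int n = (int)(sizeof s / sizeof s[0]);
            mkqsort(s, 0, n - 1, 0);
            for (int i = 0; i < n; i++) printf("%s\n", s[i]);
            return 0;
        }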
  • 5 Sorting and Selection
    5 Sorting and Selection. Telephone directories are sorted alphabetically by last name. Why? Because a sorted index can be searched quickly. Even in the telephone directory of a huge city, one can find a name in a few seconds. In an unsorted index, nobody would even try to find a name. This chapter teaches you how to turn an unordered collection of elements into an ordered collection, i.e., how to sort the collection. The sorted collection can then be searched fast. We will get to know several algorithms for sorting; the different algorithms are suited for different situations, for example sorting in main memory or sorting in external memory, and illustrate different algorithmic paradigms. Sorting has many other uses as well. An early example of a massive data-processing task was the statistical evaluation of census data; 1500 people needed seven years to manually process data from the US census in 1880. The engineer Herman Hollerith,¹ who participated in this evaluation as a statistician, spent much of the 10 years to the next census developing counting and sorting machines for mechanizing this gigantic endeavor. Although the 1890 census had to evaluate more people and more questions, the basic evaluation was finished in 1891. Hollerith's company continued to play an important role in the development of the information-processing industry; since 1924, it has been known as International Business Machines (IBM). Sorting is important for census statistics because one often wants to form subcollections, for example, all persons between age 20 and 30 and living on a farm. Two applications of sorting solve the problem.
  • Lecture Notes in Computer Science 2409 Edited by G
    Lecture Notes in Computer Science 2409. Edited by G. Goos, J. Hartmanis, and J. van Leeuwen. Berlin, Heidelberg, New York, Barcelona, Hong Kong, London, Milan, Paris, Tokyo.
    David M. Mount, Clifford Stein (Eds.). Algorithm Engineering and Experiments: 4th International Workshop, ALENEX 2002, San Francisco, CA, USA, January 4-5, 2002. Revised Papers.
    Series Editors: Gerhard Goos, Karlsruhe University, Germany; Juris Hartmanis, Cornell University, NY, USA; Jan van Leeuwen, Utrecht University, The Netherlands.
    Volume Editors: David M. Mount, University of Maryland, Department of Computer Science, College Park, MD 20742, USA, E-mail: [email protected]; Clifford Stein, Columbia University, Department of IEOR, 500 W. 120 St., MC 4704, New York, NY 10027, USA, E-mail: [email protected].
    Cataloging-in-Publication Data applied for. Die Deutsche Bibliothek - CIP-Einheitsaufnahme: Algorithm engineering and experimentation : 4th international workshop ; revised papers / ALENEX 2002, San Francisco, CA, USA, January 4-5, 2002. David M. Mount ; Clifford Stein (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in computer science ; Vol. 2409). ISBN 3-540-43977-3.
    CR Subject Classification (1998): F.2, E.1, I.3.5, G.2. ISSN 0302-9743. ISBN 3-540-43977-3 Springer-Verlag Berlin Heidelberg New York.
    This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag.
  • Algorithms and Data Structures: the Basic Toolbox
    Concise Algorithmics, or Algorithms and Data Structures — The Basic Toolbox. Kurt Mehlhorn and Peter Sanders. Draft of June 11, 2005.
    Foreword. Buy me not [25].
    Contents
    1 Amuse Geule: Integer Arithmetics
      1.1 Addition
      1.2 Multiplication: The School Method
      1.3 A Recursive Version of the School Method
      1.4 Karatsuba Multiplication
      1.5 Implementation Notes
      1.6 Further Findings
    2 Introduction
      2.1 Asymptotic Notation
      2.2 Machine Model
      2.3 Pseudocode
      2.4 Designing Correct Programs
      2.5 Basic Program Analysis
      2.6 Average Case Analysis and Randomized Algorithms
      2.7 Data Structures for Sets and Sequences
      2.8 Graphs
      2.9 Implementation Notes
      2.10 Further Findings
    3 Representing Sequences by Arrays and Linked Lists
      3.1 Unbounded Arrays
      3.2 Linked Lists
      3.3 Stacks and Queues
      3.4 Lists versus Arrays
      3.5 Implementation Notes
      3.6 Further Findings
    4 Hash Tables
      4.1 Hashing with Chaining
      4.2 Universal Hash Functions
      4.3 Hashing with Linear Probing
      4.4 Chaining Versus Linear Probing
      4.5 Implementation Notes
      4.6 Further Findings
    9 Graph Traversal
      9.1 Breadth First Search
      9.2 Depth First Search
      9.3 Implementation Notes
      9.4 Further Findings
  • Algorithms and Data Structures Kurt Mehlhorn • Peter Sanders
    Algorithms and Data Structures: The Basic Toolbox. Kurt Mehlhorn, Peter Sanders.
    Prof. Dr. Kurt Mehlhorn, Max-Planck-Institut für Informatik, Saarbrücken, Germany, [email protected]; Prof. Dr. Peter Sanders, Universität Karlsruhe, Germany, [email protected].
    ISBN 978-3-540-77977-3, e-ISBN 978-3-540-77978-0, DOI 10.1007/978-3-540-77978-0. Library of Congress Control Number: 2008926816. ACM Computing Classification (1998): F.2, E.1, E.2, G.2, B.2, D.1, I.2.8.
    © 2008 Springer-Verlag Berlin Heidelberg. This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
    Cover design: KünkelLopka GmbH, Heidelberg. Printed on acid-free paper. springer.com
    To all algorithmicists
    Preface. Algorithms are at the heart of every nontrivial computer application. Therefore every computer scientist and every professional programmer should know about the basic algorithmic toolbox: structures that allow efficient organization and retrieval of data, frequently used algorithms, and generic techniques for modeling, understanding, and solving algorithmic problems.
  • Symmetry Sort
    Symmetry Partition Sort* (Preliminary Draft). Jing-Chao Chen. Email: [email protected]. School of Informatics, DongHua University, 1882 Yan-an West Road, Shanghai, 200051, P.R. China. (June 1, 2007)
    SUMMARY. In this paper, we propose a useful replacement for quicksort-style utility functions. The replacement is called Symmetry Partition Sort, which has essentially the same principle as Proportion Extend Sort. The main difference between them is that the new algorithm always places already partially sorted inputs (used as a basis for the proportional extension) on both ends when entering the partition routine. This is advantageous for speeding up the partition routine. The library function based on the new algorithm is more attractive than Psort, which is a library function introduced in 2004. Its implementation mechanism is simple, the source code is clearer, and the speed is faster, with an O(n log n) performance guarantee. Both the robustness and adaptivity are better. As a library function, it is competitive.
    KEYWORDS: Proportion extend sort; High performance sorting; Algorithm design and implementation; Library sort functions.
    INTRODUCTION. How to devise a better library sort function has long been a studied problem, and a large number of library sort functions have been proposed. Most of them were derived from Quicksort [1], due to Hoare. The Seventh Edition [9] and the 1983 Berkeley function in Unix are early library functions based on Quicksort. A later one is Bentley and McIlroy's Quicksort (BM qsort, for short) [2], proposed in 1993. Whether the early ones or the late one, in theory they can consume quadratic time [2,11].
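    For context, the "quicksort-style utility functions" being replaced are built around a partition routine like the plain textbook one sketched below in C; this is not Symmetry Partition Sort or Psort, just the baseline pattern.

        #include <stdio.h>

        /* Plain quicksort with Hoare-style partitioning around a middle pivot. */
        static void quicksort(int *a, int lo, int hi) {
            if (lo >= hi) return;
            int pivot = a[lo + (hi - lo) / 2];
            int i = lo - 1, j = hi + 1;
            for (;;) {
                do { i++; } while (a[i] < pivot);
                do { j--; } while (a[j] > pivot);
                if (i >= j) break;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
            quicksort(a, lo, j);
            quicksort(a, j + 1, hi);
        }

        int main(void) {
            int a[] = { 9, 1, 8, 2, 7, 3 };
            quicksort(a, 0, 5);
            for (int i = 0; i < 6; i++) printf("%d ", a[i]);
            printf("\n");
            return 0;
        }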