Lecture Notes in Computer Science 2409, Edited by G. Goos, J. Hartmanis, and J. van Leeuwen


Lecture Notes in Computer Science 2409
Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo

David M. Mount, Clifford Stein (Eds.)
Algorithm Engineering and Experiments
4th International Workshop, ALENEX 2002
San Francisco, CA, USA, January 4-5, 2002
Revised Papers

Series Editors: Gerhard Goos, Karlsruhe University, Germany; Juris Hartmanis, Cornell University, NY, USA; Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors:
David M. Mount, University of Maryland, Department of Computer Science, College Park, MD 20742, USA. E-mail: [email protected]
Clifford Stein, Columbia University, Department of IEOR, 500 W. 120 St., MC 4704, New York, NY 10027, USA. E-mail: [email protected]

Cataloging-in-Publication Data applied for

Die Deutsche Bibliothek - CIP-Einheitsaufnahme
Algorithm engineering and experimentation : 4th international workshop ; revised papers / ALENEX 2002, San Francisco, CA, USA, January 4-5, 2002. David M. Mount ; Clifford Stein (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in computer science ; Vol. 2409) ISBN 3-540-43977-3

CR Subject Classification (1998): F.2, E.1, I.3.5, G.2
ISSN 0302-9743
ISBN 3-540-43977-3 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.

Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
http://www.springer.de
© Springer-Verlag Berlin Heidelberg 2002
Printed in Germany
Typesetting: Camera-ready by author, data conversion by PTP-Berlin, Stefan Sossna e.K.
Printed on acid-free paper
SPIN: 10873691 06/3142 543210

Preface

The annual workshop on Algorithm Engineering and Experiments (ALENEX) provides a forum for the presentation of original research in the implementation and experimental evaluation of algorithms and data structures. ALENEX 2002 was the fourth workshop in this series. It was held in San Francisco, California on January 4–5, 2002. This volume collects extended versions of the 15 papers that were selected for presentation from a pool of 34 submissions.

We would like to thank the sponsors, authors, and reviewers who helped make ALENEX 2002 a success. We also want to thank the invited speakers, Cynthia Phillips of Sandia National Laboratories, Martin Farach-Colton of Google, and Michael Kass of Pixar. Finally, we would like to thank Springer-Verlag for publishing these papers in their Lecture Notes in Computer Science series.

May 2002
David M. Mount
Clifford Stein

ALENEX 2002 Sponsors

The following organizations provided direct financial support, which enabled us to host invited speakers and provide reduced registration fees for students.

• Sandia National Laboratories
• Akamai Technologies Inc.
• NEC Research

The following provided in-kind support, facilitating the workshop.
• SIAM, the Society for Industrial and Applied Mathematics
• SIGACT, the ACM SIG on Algorithms and Computation Theory
• Columbia University

ALENEX 2002 Program Committee
Nancy Amato (Texas A&M University)
Marshall Bern (Xerox PARC)
Michael Goodrich (University of California, Irvine)
Tom McCormick (University of British Columbia)
Michael Mitzenmacher (Harvard University)
David Mount (University of Maryland; Co-chair)
Giri Narasimhan (Florida International University)
Rajeev Raman (University of Leicester)
Clifford Stein (Columbia University; Co-chair)

ALENEX 2002 Steering Committee
Adam Buchsbaum (AT&T Labs)
Roberto Battiti (University of Trento, Italy)
Andrew V. Goldberg (Intertrust ST Lab)
Michael T. Goodrich (University of California, Irvine)
David S. Johnson (AT&T Bell Laboratories)
Catherine C. McGeoch (Amherst College)
Bernard M.E. Moret (University of New Mexico; chair)
Jack Snoeyink (UNC-Chapel Hill)

Table of Contents
ALENEX 2002
On the Implementation of MST-Based Heuristics for the Steiner Problem in Graphs .... 1
M. Poggi de Aragão, R.F. Werneck (Catholic University of Rio de Janeiro)
A Time-Sensitive System for Black-Box Combinatorial Optimization .... 16
V. Phan, P. Sumazin, S. Skiena (SUNY Stony Brook)
A Compressed Breadth-First Search for Satisfiability .... 29
D.B. Motter, I.L. Markov (University of Michigan)
Using Multi-level Graphs for Timetable Information in Railway Systems .... 43
F. Schulz, D. Wagner (University of Konstanz), C. Zaroliagis (University of Patras)
Evaluating the Local Ratio Algorithm for Dynamic Storage Allocation .... 60
K. Pruhs (University of Pittsburgh), E. Wiewiora (University of California, San Diego)
An Experimental Study of Prefetching and Caching Algorithms for the World Wide Web .... 71
M. Curcio, S. Leonardi, A. Vitaletti (Università di Roma "La Sapienza")
The Treewidth of Java Programs .... 86
J. Gustedt (LORIA & INRIA Lorraine), O.A. Mæhle, J.A. Telle (University of Bergen)
Partitioning Planar Graphs with Costs and Weights .... 98
L. Aleksandrov (Bulgarian Academy of Sciences, Carleton University), H. Djidjev (University of Warwick), H. Guo, A. Maheshwari (Carleton University)
Maintaining Dynamic Minimum Spanning Trees: An Experimental Study .... 111
G. Cattaneo, P. Faruolo, U.F. Petrillo (Università di Salerno), G.F. Italiano (Università di Roma "Tor Vergata")
Experimental Evaluation of a New Shortest Path Algorithm .... 126
S. Pettie, V. Ramachandran, S. Sridhar (University of Texas at Austin)
Getting More from Out-of-Core Columnsort .... 143
G. Chaudhry, T. Cormen (Dartmouth College)
Topological Sweep in Degenerate Cases .... 155
E. Rafalin, D. Souvaine (Tufts University), I. Streinu (Smith College)
Acceleration of K-Means and Related Clustering Algorithms .... 166
S.J. Phillips (AT&T Labs-Research)
STAR-Tree: An Efficient Self-Adjusting Index for Moving Objects .... 178
C.M. Procopiuc (AT&T Research Lab), P.K. Agarwal (Duke University), S. Har-Peled (University of Illinois)
An Improvement on Tree Selection Sort .... 194
J. Chen (Bell Labs Research, Beijing)
Author Index .... 207

On the Implementation of MST-Based Heuristics for the Steiner Problem in Graphs

Marcus Poggi de Aragão and Renato F. Werneck
Department of Informatics, Catholic University of Rio de Janeiro, R. Marquês de São Vicente, 225, Rio de Janeiro, RJ, 22453-900, Brazil. [email protected], [email protected]

Abstract. Some of the most widely used constructive heuristics for the Steiner Problem in Graphs are based on algorithms for the Minimum Spanning Tree problem. In this paper, we examine efficient implementations of heuristics based on the classic algorithms by Prim, Kruskal, and Borůvka. An extensive experimental study indicates that the theoretical worst-case complexity of the algorithms gives little information about their behavior in practice. Careful implementation improves average computation times not only significantly, but asymptotically. Running times for our implementations are within a small constant factor of that of Prim's algorithm for the Minimum Spanning Tree problem, suggesting that there is little room for improvement.

1 Introduction

The Steiner Problem in Graphs (SPG) can be stated as follows: given an undirected graph G = (V, E), a set of terminals T ⊆ V, and a positive cost function c(v, w) for all (v, w) ∈ E, find a minimum weight connected subgraph of G containing all vertices in T. This is one of the most studied NP-hard problems. A wealth of links to recent papers on this subject can be found in [5].

In this paper, we are especially interested in constructive heuristics, algorithms that build feasible solutions from scratch. They can be used as stand-alone algorithms, providing solutions of reasonable quality within a short amount of time. However, their application is much broader. They are often used as subroutines of more elaborate primal heuristics [2,10], dual heuristics [8], and exact algorithms [4,8]. Regardless of the application, constructive heuristics should be as fast as possible, while preserving solution quality.

A number of such heuristics are described in the literature (see [3,12] for surveys). We focus our attention on heuristics that are direct extensions of exact algorithms for the Minimum Spanning Tree (MST) problem. The main contribution of this paper is a set of new, fast implementations of these methods. This includes not only versions of the traditional methods based on Prim's and Kruskal's algorithms, but also a Borůvka-based heuristic, a natural extension that has not …

Current address: Department of Computer Science, Princeton University, Princeton, NJ, 08544.

D. Mount and C. Stein (Eds.): ALENEX 2002, LNCS 2409, pp. 1–15, 2002. © Springer-Verlag Berlin Heidelberg 2002
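The heuristics studied in this paper extend the classic MST algorithms, and the authors report running times within a small constant factor of Prim's algorithm. For context, the following is a minimal C sketch of Prim's algorithm on an adjacency-matrix graph, the MST building block these heuristics extend; the instance, the matrix representation, and the simple O(n²) vertex scan are illustrative choices of ours, not the paper's implementation.

```c
#include <stdio.h>
#include <limits.h>

#define N 6            /* number of vertices (illustrative) */
#define INF INT_MAX    /* INF marks "no edge"                */

/* Prim's algorithm on an adjacency-matrix graph.  Returns the total
   MST weight and records each vertex's MST parent in parent[]. */
static int prim_mst(int cost[N][N], int parent[N]) {
    int dist[N];                 /* cheapest edge connecting v to the tree */
    int in_tree[N] = {0};
    int total = 0;

    for (int v = 0; v < N; v++) { dist[v] = INF; parent[v] = -1; }
    dist[0] = 0;                 /* grow the tree from vertex 0 */

    for (int step = 0; step < N; step++) {
        int u = -1;
        for (int v = 0; v < N; v++)      /* pick the cheapest fringe vertex */
            if (!in_tree[v] && (u == -1 || dist[v] < dist[u])) u = v;
        if (dist[u] == INF) break;       /* remaining vertices unreachable  */
        in_tree[u] = 1;
        total += dist[u];
        for (int v = 0; v < N; v++)      /* relax edges out of u            */
            if (!in_tree[v] && cost[u][v] != INF && cost[u][v] < dist[v]) {
                dist[v] = cost[u][v];
                parent[v] = u;
            }
    }
    return total;
}

int main(void) {
    int g[N][N];                 /* small illustrative instance */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) g[i][j] = (i == j) ? 0 : INF;
    g[0][1] = g[1][0] = 4; g[0][2] = g[2][0] = 3;
    g[1][2] = g[2][1] = 1; g[1][3] = g[3][1] = 2;
    g[2][3] = g[3][2] = 4; g[3][4] = g[4][3] = 2;
    g[4][5] = g[5][4] = 6; g[2][5] = g[5][2] = 5;

    int parent[N];
    printf("MST weight: %d\n", prim_mst(g, parent));
    return 0;
}
```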
Recommended publications
  • Sorting Algorithm
    A sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) which require input data to be in sorted lists; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions: 1. The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order); 2. The output is a permutation (reordering) of the input. Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956.[1] Although many consider it a solved problem, useful new sorting algorithms are still being invented (for example, library sort was first published in 2006). Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide and conquer algorithms, data structures, randomized algorithms, best, worst and average case analysis, time-space tradeoffs, and upper and lower bounds. Classification: Sorting algorithms are often classified by: • Computational complexity (worst, average and best behavior) of element comparisons in terms of the size of the list (n). For typical serial sorting algorithms good behavior is O(n log n), with parallel sort in O(log² n), and bad behavior is O(n²).
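The two output conditions above (nondecreasing order, and a permutation of the input) can be checked mechanically. The C sketch below is our own illustration, not part of the excerpt: it verifies the first condition directly and the second by comparing sorted copies of the input and output; the function names and the sort-and-compare approach are assumptions made for the example.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Condition 1: the output is in nondecreasing order. */
static int is_nondecreasing(const int *a, size_t n) {
    for (size_t i = 1; i < n; i++)
        if (a[i - 1] > a[i]) return 0;
    return 1;
}

static int cmp_int(const void *x, const void *y) {
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* Condition 2: the output is a permutation of the input.
   Checked here by sorting copies of both arrays and comparing. */
static int is_permutation(const int *in, const int *out, size_t n) {
    int *a = malloc(n * sizeof *a), *b = malloc(n * sizeof *b);
    if (!a || !b) { free(a); free(b); return 0; }
    memcpy(a, in, n * sizeof *a);
    memcpy(b, out, n * sizeof *b);
    qsort(a, n, sizeof *a, cmp_int);
    qsort(b, n, sizeof *b, cmp_int);
    int same = memcmp(a, b, n * sizeof *a) == 0;
    free(a); free(b);
    return same;
}

int main(void) {
    int in[]  = {3, 1, 2, 1};
    int out[] = {1, 1, 2, 3};
    size_t n = sizeof in / sizeof in[0];
    printf("sorted: %d, permutation: %d\n",
           is_nondecreasing(out, n), is_permutation(in, out, n));
    return 0;
}
```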
  • Automatic Algorithm Recognition Based on Programming Schemas and Beacons Aalto University
    Aalto University publication series DOCTORAL DISSERTATIONS 17/2013. Automatic Algorithm Recognition Based on Programming Schemas and Beacons: A Supervised Machine Learning Classification Approach. Ahmad Taherkhani. A doctoral dissertation completed for the degree of Doctor of Science (Technology) (Doctor of Philosophy) to be defended, with the permission of the Aalto University School of Science, at a public examination held at the lecture hall T2 of the school on the 8th of March 2013 at 12 noon. Aalto University School of Science, Department of Computer Science and Engineering, Learning + Technology Group. Supervising professor: Professor Lauri Malmi. Thesis advisor: D.Sc. (Tech) Ari Korhonen. Preliminary examiners: Professor Jorma Sajaniemi, University of Eastern Finland, Finland; Dr. Colin Johnson, University of Kent, United Kingdom. Opponent: Professor Tapio Salakoski, University of Turku, Finland. © Ahmad Taherkhani. ISBN 978-952-60-4989-2 (printed), ISBN 978-952-60-4990-8 (pdf), ISSN-L 1799-4934, ISSN 1799-4934 (printed), ISSN 1799-4942 (pdf). http://urn.fi/URN:ISBN:978-952-60-4990-8. Unigrafia Oy, Helsinki 2013, Finland. The dissertation can be read at http://lib.tkk.fi/Diss/ Abstract Aalto University, P.O.
  • PART III Basic Data Types
    PART III: Basic Data Types. Chapter 7: Using Numeric Types. Two kinds of number representations, integer and floating point, are supported by C. The various integer types in C provide exact representations of the mathematical concept of “integer” but can represent values in only a limited range. The floating-point types in C are used to represent the mathematical type “real.” They can represent real numbers over a very large range of magnitudes, but each number generally is an approximation, using a limited number of decimal places of precision. In this chapter, we define and explain the integer and floating-point data types built into C and show how to write their literal forms and I/O formats. We discuss the range of values that can be stored in each type, how to perform reliable arithmetic computations with these values, what happens when a number is converted (or cast) from one type to another, and how to choose the proper data type for a problem. We would like to think of numbers as integer values, not as patterns of bits in memory. This is possible most of the time when working with C because the language lets us name the numbers and compute with them symbolically. Details such as the length (in bytes) of the number and the arrangement of bits in those bytes can be ignored most of the time. However, inside the computer, the numbers are just bit patterns. This becomes evident when conditions such as integer overflow occur and a “correct” formula produces a wrong and meaningless answer.
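The limited range of the integer types is easy to demonstrate. The short C program below is our own illustration, not taken from the chapter; it assumes a 32-bit int (common but not guaranteed), in which case a mathematically correct sum exceeds INT_MAX and the "correct" formula yields a meaningless result unless the operands are widened first.

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    int a = 2000000000;     /* fits in a 32-bit int */
    int b = 2000000000;

    /* a + b is evaluated in int, so the true sum 4,000,000,000 does not
       fit in INT_MAX (2,147,483,647).  Signed overflow is undefined
       behavior in C; on typical two's-complement machines the value
       simply wraps, giving a meaningless negative number.             */
    long long wrong = a + b;             /* overflows before widening  */
    long long right = (long long)a + b;  /* widen first, then add      */

    printf("INT_MAX         = %d\n", INT_MAX);
    printf("a + b (as int)  = %lld\n", wrong);
    printf("a + b (widened) = %lld\n", right);
    return 0;
}
```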
  • Applied C and C++ Programming
    Applied C and C++ Programming. Alice E. Fischer, David W. Eggert (University of New Haven), Michael J. Fischer (Yale University). August 2018. Copyright © 2018 by Alice E. Fischer, David W. Eggert, and Michael J. Fischer. All rights reserved. This manuscript may be used freely by teachers and students in classes at the University of New Haven and at Yale University. Otherwise, no part of it may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the authors. Part III: Basic Data Types. Chapter 7: Using Numeric Types. Two kinds of number representations, integer and floating point, are supported by C. The various integer types in C provide exact representations of the mathematical concept of “integer” but can represent values in only a limited range. The floating-point types in C are used to represent the mathematical type “real.” They can represent real numbers over a very large range of magnitudes, but each number generally is an approximation, using a limited number of decimal places of precision. In this chapter, we define and explain the integer and floating-point data types built into C and show how to write their literal forms and I/O formats. We discuss the range of values that can be stored in each type, how to perform reliable arithmetic computations with these values, what happens when a number is converted (or cast) from one type to another, and how to choose the proper data type for a problem.
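Complementing the integer-overflow example earlier, the small C program below (again our own illustration, with arbitrarily chosen values) shows the other half of the chapter's point: floating-point values are generally approximations carrying a limited number of significant decimal digits.

```c
#include <stdio.h>
#include <float.h>

int main(void) {
    /* 0.1 has no exact binary representation, so the sum of ten 0.1's
       is only approximately 1.0.                                       */
    double sum = 0.0;
    for (int i = 0; i < 10; i++) sum += 0.1;

    printf("sum        = %.17g\n", sum);
    printf("sum == 1.0 ? %s\n", (sum == 1.0) ? "yes" : "no");

    /* float carries roughly FLT_DIG (6-7) significant decimal digits,
       double roughly DBL_DIG (15-16); larger values lose precision.    */
    float f = 123456789.0f;
    printf("float 123456789 stored as %.1f\n", f);
    printf("FLT_DIG = %d, DBL_DIG = %d\n", FLT_DIG, DBL_DIG);
    return 0;
}
```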
  • References and Index
    References. [1] E. H. L. Aarts and J. Korst. Simulated Annealing and Boltzmann Machines. Wiley, 1989. [2] J. Abello, A. Buchsbaum, and J. Westbrook. A functional approach to external graph algorithms. Algorithmica, 32(3):437–458, 2002. [3] W. Ackermann. Zum hilbertschen Aufbau der reellen Zahlen. Mathematische Annalen, 99:118–133, 1928. [4] G. M. Adel'son-Vel'skii and E. M. Landis. An algorithm for the organization of information. Soviet Mathematics Doklady, 3:1259–1263, 1962. [5] A. Aggarwal and J. S. Vitter. The input/output complexity of sorting and related problems. Communications of the ACM, 31(9):1116–1127, 1988. [6] A. V. Aho, J. E. Hopcroft, and J. D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, 1974. [7] A. V. Aho, B. W. Kernighan, and P. J. Weinberger. The AWK Programming Language. Addison-Wesley, 1988. [8] R. K. Ahuja, R. L. Magnanti, and J. B. Orlin. Network Flows. Prentice Hall, 1993. [9] R. K. Ahuja, K. Mehlhorn, J. B. Orlin, and R. E. Tarjan. Faster algorithms for the shortest path problem. Journal of the ACM, 3(2):213–223, 1990. [10] N. Alon, M. Dietzfelbinger, P. B. Miltersen, E. Petrank, and E. Tardos. Linear hash functions. Journal of the ACM, 46(5):667–683, 1999. [11] A. Andersson, T. Hagerup, S. Nilsson, and R. Raman. Sorting in linear time? Journal of Computer and System Sciences, 57(1):74–93, 1998. [12] F. Annexstein, M. Baumslag, and A. Rosenberg. Group action graphs and parallel architectures. SIAM Journal on Computing, 19(3):544–569, 1990. [13] D.
  • Engineering Burstsort: Toward Fast In-Place String Sorting
    Engineering Burstsort: Toward Fast In-Place String Sorting. RANJAN SINHA and ANTHONY WIRTH, The University of Melbourne. Burstsort is a trie-based string sorting algorithm that distributes strings into small buckets whose contents are then sorted in cache. This approach has earlier been demonstrated to be efficient on modern cache-based processors [Sinha & Zobel, JEA 2004]. In this article, we introduce improvements that reduce by a significant margin the memory requirement of Burstsort: It is now less than 1% greater than an in-place algorithm. These techniques can be applied to existing variants of Burstsort, as well as other string algorithms such as for string management. We redesigned the buckets, introducing sub-buckets and an index structure for them, which resulted in an order-of-magnitude space reduction. We also show the practicality of moving some fields from the trie nodes to the insertion point (for the next string pointer) in the bucket; this technique reduces memory usage of the trie nodes by one-third. Importantly, the trade-off for the reduction in memory use is only a very slight increase in the running time of Burstsort on real-world string collections. In addition, during the bucket-sorting phase, the string suffixes are copied to a small buffer to improve their spatial locality, lowering the running time of Burstsort by up to 30%. These memory usage enhancements have enabled the copy-based approach [Sinha et al., JEA 2006] to also reduce the memory usage with negligible impact on speed. Categories and Subject Descriptors: E.1 [Data Structures]; E.5 [Files]: Sorting/Searching. General Terms: Algorithms, Design, Experimentation, Performance. Additional Key Words and Phrases: Sorting, algorithms, cache, experimental algorithms, string management, tries. ACM Reference Format: Sinha, R.
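Burstsort proper maintains a trie of growable buckets and bursts buckets that overflow; none of that machinery is shown here. The C sketch below is only a one-level simplification we add for illustration: strings are distributed into 256 fixed buckets by their first byte and each bucket is then sorted with the library comparison sort, which conveys the distribute-then-sort-small-buckets idea behind the algorithm without the trie, bursting, or the memory-layout techniques the article is actually about.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256

static int cmp_str(const void *a, const void *b) {
    return strcmp(*(const char *const *)a, *(const char *const *)b);
}

/* One-level distribution: place each string in a bucket keyed by its
   first byte, sort each bucket, then read the buckets out in order. */
static void distribute_and_sort(const char **s, size_t n, const char **out) {
    size_t count[NBUCKETS] = {0}, start[NBUCKETS], fill[NBUCKETS];

    for (size_t i = 0; i < n; i++)                /* count bucket sizes   */
        count[(unsigned char)s[i][0]]++;

    size_t pos = 0;
    for (int b = 0; b < NBUCKETS; b++) { start[b] = pos; pos += count[b]; }
    memcpy(fill, start, sizeof fill);

    for (size_t i = 0; i < n; i++)                /* distribute pointers  */
        out[fill[(unsigned char)s[i][0]]++] = s[i];

    for (int b = 0; b < NBUCKETS; b++)            /* sort each small bucket */
        if (count[b] > 1)
            qsort(out + start[b], count[b], sizeof *out, cmp_str);
}

int main(void) {
    const char *in[] = {"burst", "sort", "cache", "bucket", "trie", "string"};
    size_t n = sizeof in / sizeof in[0];
    const char *out[6];

    distribute_and_sort(in, n, out);
    for (size_t i = 0; i < n; i++) printf("%s\n", out[i]);
    return 0;
}
```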
  • Sorting and Efficient Searching
    Chapter 6: Sorting and Efficient Searching. What is in this Chapter? Sorting is a fundamental problem-solving "tool" in computer science which can greatly affect an algorithm's efficiency. Sorting is discussed in this chapter as it pertains to the area of computer science. A few sorting strategies are discussed (i.e., bubble sort, selection sort, insertion sort and counting sort) and briefly compared. Finally, sorting is applied to the problem of simulating a fire spreading across a forest. We also discuss Binary Searching ... a more efficient way of searching through sorted data. 6.1 Sorting. In addition to searching, sorting is one of the most fundamental "tools" that a programmer can use to solve problems. Sorting is the process of arranging items in some sequence and/or in different sets. In computer science, we are often presented with a list of data that needs to be sorted. For example, we may wish to sort (or arrange) a list of people. Naturally, we may imagine a list of people's names sorted by their last names. This is very common and is called a lexicographical (or alphabetical) sorting. The "way" in which we compare any two items for sorting is defined by the sort order. There are many other "sort orders" to sort a list of people. Depending on the application, we would choose the most applicable sorting order: • sort by ID numbers • sort by age • sort by height • sort by weight • sort by birth date • etc. A list of items is just one obvious example of where sorting is often used.
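The chapter pairs sorting with binary searching through sorted data. A minimal iterative C version of binary search over a sorted int array (our own illustration, not the chapter's code) looks like this:

```c
#include <stdio.h>

/* Binary search over a sorted array: returns the index of key,
   or -1 if key is not present.  Uses O(log n) comparisons. */
static int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of lo + hi */
        if (a[mid] == key)      return mid;
        else if (a[mid] < key)  lo = mid + 1;
        else                    hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int sorted[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
    int n = sizeof sorted / sizeof sorted[0];
    printf("index of 23: %d\n", binary_search(sorted, n, 23));  /* 5  */
    printf("index of 7 : %d\n", binary_search(sorted, n, 7));   /* -1 */
    return 0;
}
```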
  • Performance Improvement for Multi-Key Quick Sort Using Kepler GPUs
    Performance Improvement for Multi-Key Quick Sort using Kepler GPUs. Bharath Shamasundar, Amoolya Shetty ({bharathbs294, amushetty93}@gmail.com) and Ananya Rao, Supreetha Shetty, Neelima Bayyapu ({ananyaraoj, supreethashetty8}@gmail.com; [email protected]), Department of Computer Science and Engineering, NMAM Institute of Technology, Udupi, India. Abstract — This paper presents the performance improvement obtained by multi key quick sort on Kepler NVIDIA GPUs. Since Multi key quick sort is a recursion based algorithm many of the researchers have found it laborious to parallelize the algorithm on the multi and many core architectures. A survey of imperative string sorting algorithm and a robust perceptivity of the Kepler GPU architecture gave rise to an intriguing rumination of matching the template of Multi key quick sort with the dynamic parallelism feature offered by the Kepler based GPU's. The CPU parallel implementation of parallel multi key quick sort gives 33 to 50 percentage of improvement and 62 to 75 percentage of improvement when compared to 8-bit and 16-bit parallel MSD radix sort respectively. … programmers making it easily programmable. One such programming API that has been introduced by NVidia for parallel computing using GPUs is the Compute Unified Device Architecture (CUDA) [2-3]. With every new version of the GPU architecture, there is an increased possibility of applications that can take advantage of this improvement. A sorting algorithm, based on recursion that was previously considered incapable of being parallelized in terms of GPGPU is now ported onto the latest GPU architecture that supports the framework of the sorting algorithm is considered in this paper. Some research studies have provided a …
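For reference, multi-key quicksort is Bentley and Sedgewick's three-way radix quicksort for strings: partition on the character at one position, then recurse on the "less" and "greater" parts at the same position and on the "equal" part at the next position. The sequential C sketch below is our own illustration of that base algorithm; it is not the paper's CUDA dynamic-parallelism implementation.

```c
#include <stdio.h>

static void swap_ptr(const char **a, const char **b) {
    const char *t = *a; *a = *b; *b = t;
}

/* Multi-key (three-way radix) quicksort: sort s[0..n-1] by the
   characters at position d and beyond, assuming all strings in the
   range already share the prefix s[..][0..d-1]. */
static void mkqsort(const char **s, int n, int d) {
    if (n <= 1) return;

    int pivot = (unsigned char)s[n / 2][d];
    int lt = 0, gt = n, i = 0;   /* s[0..lt) < pivot, s[gt..n) > pivot */

    while (i < gt) {
        int c = (unsigned char)s[i][d];
        if (c < pivot)       swap_ptr(&s[lt++], &s[i++]);
        else if (c > pivot)  swap_ptr(&s[i], &s[--gt]);
        else                 i++;
    }

    mkqsort(s, lt, d);                   /* smaller character at d       */
    if (pivot != 0)                      /* equal part: next character,  */
        mkqsort(s + lt, gt - lt, d + 1); /* unless all strings ended here */
    mkqsort(s + gt, n - gt, d);          /* larger character at d        */
}

int main(void) {
    const char *words[] = {"banana", "band", "bandana", "apple", "ape", "cat"};
    int n = (int)(sizeof words / sizeof words[0]);

    mkqsort(words, n, 0);
    for (int i = 0; i < n; i++) printf("%s\n", words[i]);
    return 0;
}
```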
  • 5 Sorting and Selection
    Telephone directories are sorted alphabetically by last name. Why? Because a sorted index can be searched quickly. Even in the telephone directory of a huge city, one can find a name in a few seconds. In an unsorted index, nobody would even try to find a name. This chapter teaches you how to turn an unordered collection of elements into an ordered collection, i.e., how to sort the collection. The sorted collection can then be searched fast. We will get to know several algorithms for sorting; the different algorithms are suited for different situations, for example sorting in main memory or sorting in external memory, and illustrate different algorithmic paradigms. Sorting has many other uses as well. An early example of a massive data-processing task was the statistical evaluation of census data; 1500 people needed seven years to manually process data from the US census in 1880. The engineer Herman Hollerith, who participated in this evaluation as a statistician, spent much of the 10 years to the next census developing counting and sorting machines for mechanizing this gigantic endeavor. Although the 1890 census had to evaluate more people and more questions, the basic evaluation was finished in 1891. Hollerith's company continued to play an important role in the development of the information-processing industry; since 1924, it has been known as International Business Machines (IBM). Sorting is important for census statistics because one often wants to form subcollections, for example, all persons between age 20 and 30 and living on a farm. Two applications of sorting solve the problem.
  • Algorithms and Data Structures: the Basic Toolbox
    Concise Algorithmics, or Algorithms and Data Structures — The Basic Toolbox. Kurt Mehlhorn and Peter Sanders. Draft, June 11, 2005. Foreword: Buy me not [25]. Contents: 1 Amuse Geule: Integer Arithmetics 3; 1.1 Addition 4; 1.2 Multiplication: The School Method 4; 1.3 A Recursive Version of the School Method 6; 1.4 Karatsuba Multiplication 8; 1.5 Implementation Notes 10; 1.6 Further Findings 11; 2 Introduction 13; 2.1 Asymptotic Notation 14; 2.2 Machine Model 16; 2.3 Pseudocode 19; 2.4 Designing Correct Programs 23; 2.5 Basic Program Analysis 25; 2.6 Average Case Analysis and Randomized Algorithms 29; 2.7 Data Structures for Sets and Sequences 32; 2.8 Graphs 32; 2.9 Implementation Notes 37; 2.10 Further Findings 38; 3 Representing Sequences by Arrays and Linked Lists 39; 3.1 Unbounded Arrays 40; 3.2 Linked Lists 45; 3.3 Stacks and Queues 51; 3.4 Lists versus Arrays 54; 3.5 Implementation Notes 56; 3.6 Further Findings 57; 4 Hash Tables 59; 4.1 Hashing with Chaining 62; 4.2 Universal Hash Functions 63; 4.3 Hashing with Linear Probing 67; 4.4 Chaining Versus Linear Probing 70; 4.5 Implementation Notes 70; 4.6 Further Findings …; 9 Graph Traversal 157; 9.1 Breadth First Search 158; 9.2 Depth First Search 159; 9.3 Implementation Notes 165; 9.4 Further Findings 166
  • Algorithms and Data Structures Kurt Mehlhorn • Peter Sanders
    Algorithms and Data Structures: The Basic Toolbox. Kurt Mehlhorn and Peter Sanders. Prof. Dr. Kurt Mehlhorn, Max-Planck-Institut für Informatik, Saarbrücken, Germany, [email protected]; Prof. Dr. Peter Sanders, Universität Karlsruhe, Germany, [email protected]. ISBN 978-3-540-77977-3, e-ISBN 978-3-540-77978-0, DOI 10.1007/978-3-540-77978-0. Library of Congress Control Number: 2008926816. ACM Computing Classification (1998): F.2, E.1, E.2, G.2, B.2, D.1, I.2.8. © 2008 Springer-Verlag Berlin Heidelberg. This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Cover design: KünkelLopka GmbH, Heidelberg. Printed on acid-free paper. springer.com. To all algorithmicists. Preface: Algorithms are at the heart of every nontrivial computer application. Therefore every computer scientist and every professional programmer should know about the basic algorithmic toolbox: structures that allow efficient organization and retrieval of data, frequently used algorithms, and generic techniques for modeling, understanding, and solving algorithmic problems.
  • Symmetry Sort
    Symmetry Partition Sort* (Preliminary Draft). Jing-Chao Chen. Email: [email protected]. School of Informatics, DongHua University, 1882 Yan-an West Road, Shanghai, 200051, P.R. China. (June 1, 2007). SUMMARY: In this paper, we propose a useful replacement for quicksort-style utility functions. The replacement is called Symmetry Partition Sort, which has essentially the same principle as Proportion Extend Sort. The maximal difference between them is that the new algorithm always places already partially sorted inputs (used as a basis for the proportional extension) on both ends when entering the partition routine. This is advantageous to speeding up the partition routine. The library function based on the new algorithm is more attractive than Psort, a library function introduced in 2004. Its implementation mechanism is simple. The source code is clearer. The speed is faster, with an O(n log n) performance guarantee. Both the robustness and adaptivity are better. As a library function, it is competitive. KEYWORDS: Proportion extend sort; High performance sorting; Algorithm design and implementation; Library sort functions. INTRODUCTION: How to devise a better library sort function has long been a studied problem. A large number of library sort functions have been proposed. Most of them were derived from Quicksort [1], due to Hoare. The Seventh Edition [9] and the 1983 Berkeley function in Unix are early library functions based on Quicksort. A later one is Bentley and McIlroy's Quicksort (BM qsort, for short) [2], proposed in 1993. Whether the early ones or the later one, in theory, they may consume quadratic time [2,11].
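For context, the quicksort-style utility functions discussed here share the interface of C's qsort: a base pointer, an element count, an element size, and a comparison callback. The minimal usage example below is our own illustration of that interface and says nothing about Symmetry Partition Sort's internals.

```c
#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort: must return <0, 0, or >0. */
static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void) {
    double data[] = {3.5, -1.0, 2.25, 0.0, 7.125};
    size_t n = sizeof data / sizeof data[0];

    /* The generic interface every quicksort-style library sort exposes:
       base pointer, element count, element size, comparison function.   */
    qsort(data, n, sizeof data[0], cmp_double);

    for (size_t i = 0; i < n; i++) printf("%g ", data[i]);
    printf("\n");
    return 0;
}
```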