Algorithms in a Nutshell 2E


SECOND EDITION

Algorithms in a Nutshell 2E

George T. Heineman, Gary Pollice, and Stanley Selkow

Boston

Algorithms in a Nutshell 2E, Second Edition
by George T. Heineman, Gary Pollice, and Stanley Selkow

Copyright © 2010 George Heineman, Gary Pollice and Stanley Selkow. All rights reserved. Printed in the United States of America.

Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472.

O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://safaribooksonline.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or [email protected].

Editor: Mary Treseler
Indexer: FIX ME!
Production Editor: FIX ME!
Cover Designer: Karen Montgomery
Copyeditor: FIX ME!
Interior Designer: David Futato
Proofreader: FIX ME!
Illustrator: Rebecca Demarest

January -4712: Second Edition

Revision History for the Second Edition:
2015-07-27: Early release revision 1

See http://oreilly.com/catalog/errata.csp?isbn=0636920032885 for release details.

Nutshell Handbook, the Nutshell Handbook logo, and the O’Reilly logo are registered trademarks of O’Reilly Media, Inc. !!FILL THIS IN!! and related trade dress are trademarks of O’Reilly Media, Inc.

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and O’Reilly Media, Inc. was aware of a trademark claim, the designations have been printed in caps or initial caps.

While every precaution has been taken in the preparation of this book, the publisher and authors assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.

ISBN: 063-6-920-03288-5 [?]

Table of Contents

1. Thinking Algorithmically: Understand the Problem; Naive Solution; Intelligent Approaches; Greedy; Divide and Conquer; Parallel; Approximation; Generalization; Summary

2. The Mathematics of Algorithms: Size of a Problem Instance; Rate of Growth of Functions; Analysis in the Best, Average, and Worst Cases; Worst Case; Average Case; Best Case; Performance Families; Constant Behavior; Log n Behavior; Sublinear O(n^d) Behavior for d < 1; Linear Performance; n log n Performance; Quadratic Performance; Less Obvious Performance Computations; Exponential Performance; Benchmark Operations; Lower and Upper Bounds; References

3. Algorithm Building Blocks: Algorithm Template Format; Name; Input/Output; Context; Solution; Analysis; Variations; Pseudocode Template Format; Empirical Evaluation Format; Floating-Point Computation; Performance; Rounding Error; Comparing Floating Point Values; Special Quantities; Example Algorithm; Name and Synopsis; Input/Output; Context; Solution; Analysis; Common Approaches; Greedy; Divide and Conquer; Dynamic Programming; References

4. Sorting Algorithms: Overview; Terminology; Representation; Comparable Elements; Stable Sorting; Criteria for Choosing a Sorting Algorithm; Transposition Sorting; Insertion Sort; Context; Solution; Analysis; Selection Sort; Heap Sort; Context; Solution; Analysis; Variations; Partition-based Sorting; Context; Solution; Analysis; Variations; Sorting Without Comparisons; Bucket Sort; Solution; Analysis; Variations; Sorting with Extra Storage; Merge Sort; Input/Output; Solution; Analysis; Variations; String Benchmark Results; Analysis Techniques; References

5. Searching: Sequential Search; Input/Output; Context; Solution; Analysis; Binary Search; Input/Output; Context; Solution; Analysis; Variations; Hash-based Search; Input/Output; Context; Solution; Analysis; Variations; Bloom Filter; Input/Output; Context; Solution; Analysis; Binary Search Tree; Input/Output; Context; Solution; Analysis; Variations; References

6. Graph Algorithms: Graphs; Data Structure Design; Depth-First Search; Input/Output; Context; Solution; Analysis; Variations; Breadth-First Search; Input/Output; Context; Solution; Analysis; Single-Source Shortest Path; Input/Output; Solution; Analysis; Dijkstra’s Algorithm For Dense Graphs; Variations; Comparing Single Source Shortest Path Options; Benchmark data; Dense graphs; Sparse graphs; All Pairs Shortest Path; Input/Output; Solution; Analysis; Minimum Spanning Tree Algorithms; Solution; Analysis; Variations; Final Thoughts on Graphs; Storage Issues; Graph Analysis; References

7. Path Finding in AI: Game Trees; Minimax; Input/Output; Context; Solution; Analysis; NegMax; Solution; Analysis; AlphaBeta; Solution; Analysis; Search Trees; Representing State; Calculate available moves; Using Heuristic Information; Maximum Expansion Depth; Depth-First Search; Input/Output; Context; Solution; Analysis; Breadth-First Search; Input/Output; Context; Solution; Analysis; A*Search; Input/Output; Context; Solution; Analysis; Variations; Comparing Search Tree Algorithms; References

8. Network Flow Algorithms: Network Flow; Maximum Flow; Input/Output; Solution; Analysis; Optimization; Related Algorithms; Bipartite Matching; Input/Output; Solution; Analysis; Reflections on Augmenting Paths; Minimum Cost Flow; Transshipment; Solution; Transportation; Solution; Assignment; Solution; Linear Programming; References

9. Computational Geometry: Classifying Problems; Input data; Computation; Nature of the task; Assumptions; Convex Hull; Convex Hull Scan; Input/Output; Context; Solution; Analysis; Variations; Computing Line Segment Intersections; LineSweep; Input/Output; Context; Solution; Analysis; Variations; Voronoi Diagram; Input/Output; Solution; Analysis; References

10. Spatial Tree Structures: Nearest Neighbor queries; Range Queries; Intersection Queries; Spatial Tree Structures; KD-Tree; Quad Tree; R-Tree; Nearest Neighbor; Input/Output; Context; Solution; Analysis; Variations; Range Query; Input/Output; Context; Solution; Analysis; QuadTrees; Input/Output; Solution; Analysis; Variations; R-Trees; Input/Output; Context; Solution; Analysis; References

11. Emerging Algorithm Categories: Variations on a Theme; Approximation Algorithms; Input/Output; Context; Solution; Analysis; Parallel Algorithms; Probabilistic Algorithms; Estimating the Size of a Set; Estimating the Size of a Search Tree; References

12. Epilogue: Principle: Know Your Data; Principle: Decompose the Problem into Smaller Problems; Principle: Choose the Right Data Structure; Principle: Make the Space versus Time Trade-off; Principle: If No Solution Is Evident, Construct a Search; Principle: If No Solution Is Evident, Reduce Your Problem to Another Problem That Has a Solution; Principle: Writing Algorithms Is Hard—Testing Algorithms Is Harder; Principle: Accept Approximate Solution When Possible; Principle: Add Parallelism to Increase Performance

A. Benchmarking

CHAPTER 1
Thinking Algorithmically

Algorithms matter! Knowing which algorithm to apply under which set of circumstances can make a big difference in the software you produce. Let this book be your guide to learning about a number of important algorithm domains, such as sorting and searching. We will introduce a number of general approaches used by algorithms to solve problems, such as Divide and Conquer or Greedy strategy. You will be able to apply this knowledge to improve the efficiency of your own software.

Data structures have been tightly tied to algorithms since the dawn of computing. In this book, you will learn the fundamental data structures used to properly represent information for efficient processing.

What do you need to do when choosing an algorithm? We’ll explore that in the following sections.

Understand the Problem

The first step to design an algorithm is to understand the problem you want to solve. Let’s start with a sample problem from the field of computational geometry. Given a set of points, P, in a two-dimensional plane, such as shown in Figure 1-1, picture a rubber band that has been stretched around the points and released. The resulting shape is known as the convex hull, that is, the smallest convex shape that fully encloses all points in P.

Figure 1-1. Sample set of points in plane

Given a convex hull for P, any line segment drawn between any two points in P lies totally within the hull. Let’s assume that we order the points in the hull in clockwise fashion. Thus, the hull is formed by a clockwise ordering of h points L0, L1, … Lh-1 as shown in Figure 1-2. Each sequence of three hull points Li, Li+1, Li+2 creates a right turn.

Figure 1-2. Computed convex hull for points

With just this information, you can probably draw the convex hull for any set of points, but could you come up with an algorithm, that is, a step-by-step sequence of instructions, that will efficiently compute the convex hull for any set of points?
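The right-turn condition is easy to state computationally. As a brief aside that is not part of the book’s text, here is a minimal Python sketch (the function names are our own) that tests whether three points p, q, and r form a right turn by checking the sign of a cross product:

    # Orientation test: a negative cross product means the turn from p to q to r
    # bends clockwise (a right turn); positive means counterclockwise; zero means
    # the three points are collinear.
    def cross(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def is_right_turn(p, q, r):
        return cross(p, q, r) < 0

    # Walking east from (0, 0) to (2, 0) and then dropping to (2, -1) bends
    # clockwise, so this prints True.
    print(is_right_turn((0, 0), (2, 0), (2, -1)))

Every consecutive triple Li, Li+1, Li+2 on a clockwise hull passes this test, and that invariant is what most hull algorithms maintain as they add or discard candidate points.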
What we find interesting about the convex hull problem is that it doesn’t seem to be easily classified into existing algorithmic domains.
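One classic answer to the question posed above is to sweep over the points in sorted order, discarding any point that breaks the right-turn invariant. The sketch below is our own minimal rendering of Andrew’s monotone chain scan, not the book’s implementation; it assumes the points are (x, y) tuples and returns the hull vertices in clockwise order:

    # Minimal convex hull sketch (Andrew's monotone chain). Runs in O(n log n)
    # time, dominated by the initial sort.
    def cross(p, q, r):
        # Same orientation test as in the earlier sketch.
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    def convex_hull(points):
        pts = sorted(set(points))          # sort by x, then y; drop duplicates
        if len(pts) <= 2:
            return pts

        def chain(sequence):
            hull = []
            for p in sequence:
                # Pop the middle point whenever the last three fail to turn right.
                while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
                    hull.pop()
                hull.append(p)
            return hull

        upper = chain(pts)                 # left-to-right pass: upper chain
        lower = chain(reversed(pts))       # right-to-left pass: lower chain
        return upper[:-1] + lower[:-1]     # shared endpoints appear only once

    # Example: a square with one interior point; the interior point is discarded.
    print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))

Sorting costs O(n log n), and each point is pushed and popped at most once per pass, so the scan itself is linear. Chapter 9 returns to this problem in its Convex Hull Scan section.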