Fibonacci Heaps

16 pages · PDF · 1,020 KB

Fibonacci Heaps

Outline for Today
● Recap from Last Time: a quick refresher on binomial heaps and lazy binomial heaps.
● The Need for decrease-key: an important operation in many graph algorithms.
● Fibonacci Heaps: a data structure efficiently supporting decrease-key.
● Representational Issues: some of the challenges in Fibonacci heaps.

Recap from Last Time

(Lazy) Binomial Heaps
● Last time, we covered the binomial heap and a variant called the lazy binomial heap.
● These are priority queue structures designed to support efficient melding.
● Elements are stored in a collection of binomial trees.

Worked example (slide sequence): starting from a pictured eager binomial heap and a pictured lazy binomial heap, draw what happens if we enqueue the numbers 1, 2, 3, 4, 5, 6, 7, 8, and 9 into each heap, and then draw what happens after performing an extract-min in each. The eager heap merges trees of equal order on every enqueue, while the lazy heap just appends each new element as a singleton tree and defers all merging to extract-min, which coalesces trees of equal order until at most one tree of each order remains. [Step-by-step tree diagrams omitted.]
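To make the recap concrete, here is a minimal Python sketch of a lazy binomial heap (my own illustration, not code from the slides; the names LazyBinomialHeap, roots, and order are assumptions). Insert and meld only touch the root list, so they are O(1); extract-min pays for the deferred work by coalescing.

```python
# A minimal sketch of a lazy binomial heap, matching the recap above.
# Class and field names are illustrative, not taken from the slides.

class Node:
    def __init__(self, key):
        self.key = key
        self.children = []   # child binomial trees
        self.order = 0       # a tree of order k has 2^k nodes

class LazyBinomialHeap:
    def __init__(self):
        self.roots = []      # unsorted list of tree roots

    def insert(self, key):
        # Lazy insert: just add a singleton order-0 tree. O(1).
        self.roots.append(Node(key))

    def meld(self, other):
        # Lazy meld: concatenate the root lists. O(1).
        self.roots.extend(other.roots)
        other.roots = []

    def extract_min(self):
        # Assumes the heap is non-empty. Find and remove the minimum root,
        # expose its children...
        min_root = min(self.roots, key=lambda n: n.key)
        self.roots.remove(min_root)
        self.roots.extend(min_root.children)
        # ...then coalesce: link trees of equal order until at most one
        # tree of each order remains (this is where the real work happens).
        by_order = {}
        for tree in self.roots:
            while tree.order in by_order:
                other = by_order.pop(tree.order)
                if other.key < tree.key:
                    tree, other = other, tree
                tree.children.append(other)   # smaller key becomes the parent
                tree.order += 1
            by_order[tree.order] = tree
        self.roots = list(by_order.values())
        return min_root.key
```

For example, enqueueing 1 through 9 into this sketch and calling extract_min() returns 1 and coalesces the remaining eight keys into a single order-3 binomial tree.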
Recommended publications
  • An Alternative to Fibonacci Heaps with Worst Case Rather Than Amortized Time Bounds∗
    Relaxed Fibonacci heaps: An alternative to Fibonacci heaps with worst case rather than amortized time bounds*. Chandrasekhar Boyapati, C. Pandu Rangan. Department of Computer Science and Engineering, Indian Institute of Technology, Madras 600036, India. Email: [email protected]. November 1995.
    Abstract. We present a new data structure called relaxed Fibonacci heaps for implementing priority queues on a RAM. Relaxed Fibonacci heaps support the operations find minimum, insert, decrease key and meld, each in O(1) worst case time, and delete and delete min in O(log n) worst case time.
    Introduction. The implementation of priority queues is a classical problem in data structures. Priority queues find applications in a variety of network problems like single source shortest paths, all pairs shortest paths, minimum spanning tree, weighted bipartite matching etc. [1] [2] [3] [4] [5] In the amortized sense, the best performance is achieved by the well known Fibonacci heaps. They support delete and delete min in amortized O(log n) time and find min, insert, decrease key and meld in amortized constant time. Fast meldable priority queues described in [1] achieve all the above time bounds in worst case rather than amortized time, except for the decrease key operation, which takes O(log n) worst case time. On the other hand, relaxed heaps described in [2] achieve in the worst case all the time bounds of the Fibonacci heaps except for the meld operation, which takes O(log n) worst case time. The problem posed in [1] was to consider whether it is possible to support both decrease key and meld simultaneously in constant worst case time. (*Please see the Errata at the end of the paper.)
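For orientation, the operation set and the worst-case bounds claimed in the abstract can be written down as a small interface sketch (my own illustration, not code from the paper; the class and method names are assumed):

```python
# Hypothetical interface for the meldable priority queue discussed above;
# the comments record the per-operation worst-case bounds claimed for
# relaxed Fibonacci heaps (this is not an implementation of the paper).

class MeldablePriorityQueue:
    def find_min(self):                      # O(1) worst case
        raise NotImplementedError

    def insert(self, key):                   # O(1) worst case
        raise NotImplementedError

    def decrease_key(self, node, new_key):   # O(1) worst case
        raise NotImplementedError

    def meld(self, other):                   # O(1) worst case
        raise NotImplementedError

    def delete_min(self):                    # O(log n) worst case
        raise NotImplementedError

    def delete(self, node):                  # O(log n) worst case
        raise NotImplementedError
```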
  • Advanced Data Structures
    Advanced Data Structures PETER BRASS City College of New York CAMBRIDGE UNIVERSITY PRESS Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo Cambridge University Press The Edinburgh Building, Cambridge CB2 8RU, UK Published in the United States of America by Cambridge University Press, New York www.cambridge.org Information on this title: www.cambridge.org/9780521880374 © Peter Brass 2008 This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. First published in print format 2008 ISBN-13 978-0-511-43685-7 eBook (EBL) ISBN-13 978-0-521-88037-4 hardback Cambridge University Press has no responsibility for the persistence or accuracy of urls for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate. Contents Preface page xi 1 Elementary Structures 1 1.1 Stack 1 1.2 Queue 8 1.3 Double-Ended Queue 16 1.4 Dynamical Allocation of Nodes 16 1.5 Shadow Copies of Array-Based Structures 18 2 Search Trees 23 2.1 Two Models of Search Trees 23 2.2 General Properties and Transformations 26 2.3 Height of a Search Tree 29 2.4 Basic Find, Insert, and Delete 31 2.5 Returning from Leaf to Root 35 2.6 Dealing with Nonunique Keys 37 2.7 Queries for the Keys in an Interval 38 2.8 Building Optimal Search Trees 40 2.9 Converting Trees into Lists 47 2.10
  • arXiv:submit/0396732
    Dynamic 3-sided Planar Range Queries with Expected Doubly Logarithmic Time⋆ Gerth Stølting Brodal1, Alexis C. Kaporis2, Apostolos N. Papadopoulos4, Spyros Sioutas3, Konstantinos Tsakalidis1, Kostas Tsichlas4. 1 MADALGO⋆⋆, Department of Computer Science, Aarhus University, Denmark, {gerth,tsakalid}@madalgo.au.dk; 2 Computer Engineering and Informatics Department, University of Patras, Greece, [email protected]; 3 Department of Informatics, Ionian University, Corfu, Greece, [email protected]; 4 Department of Informatics, Aristotle University of Thessaloniki, Greece, [email protected]. Abstract. This work studies the problem of 2-dimensional searching for the 3-sided range query of the form [a,b] × (−∞, c] in both main and external memory, by considering a variety of input distributions. We present three sets of solutions, each of which examines the 3-sided problem in both the RAM and the I/O model respectively. The presented data structures are deterministic and the expectation is with respect to the input distribution: (1) Under continuous µ-random distributions of the x and y coordinates, we present a dynamic linear main memory solution, which answers 3-sided queries in O(log n + t) worst case time and scales with O(log log n) expected with high probability update time, where n is the current number of stored points and t is the size of the query output. We externalize this solution, gaining O(logB n + t/B) worst case and O(logB log n) amortized expected with high probability I/Os for query and update operations respectively, where B is the disk block size. (2) Then, we assume that the inserted points have their x-coordinates drawn from a class of smooth distributions, whereas the y-coordinates are arbitrarily distributed.
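For readers unfamiliar with the notation, a 3-sided query [a, b] × (−∞, c] asks for all points whose x-coordinate lies in [a, b] and whose y-coordinate is at most c. The brute-force sketch below (my own illustration, not the paper's data structure; three_sided_query is an assumed name) just spells out that predicate; the paper's structures answer it in roughly logarithmic time plus the output size t.

```python
# A tiny, brute-force illustration of the 3-sided query [a, b] x (-inf, c]:
# report every point with a <= x <= b and y <= c.

def three_sided_query(points, a, b, c):
    """points: iterable of (x, y) pairs; returns the t matching points."""
    return [(x, y) for (x, y) in points if a <= x <= b and y <= c]

pts = [(1, 5), (2, 1), (3, 9), (4, 2), (7, 0)]
print(three_sided_query(pts, a=2, b=5, c=3))   # [(2, 1), (4, 2)]
```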
  • Introduction to Algorithms and Data Structures
    Introduction to Algorithms and Data Structures Markus Bläser Saarland University Draft—Thursday 22, 2015 and forever Contents 1 Introduction 1 1.1 Binary search 1 1.2 Machine model 3 1.3 Asymptotic growth of functions 5 1.4 Running time analysis 7 1.4.1 On foot 7 1.4.2 By solving recurrences 8 2 Sorting 11 2.1 Selection-sort 11 2.2 Merge sort 13 3 More sorting procedures 16 3.1 Heap sort 16 3.1.1 Heaps 16 3.1.2 Establishing the Heap property 18 3.1.3 Building a heap from scratch 19 3.1.4 Heap sort 20 3.2 Quick sort 21 4 Selection problems 23 4.1 Lower bounds 23 4.1.1 Sorting 25 4.1.2 Minimum selection 27 4.2 Upper bounds for selecting the median 27 5 Elementary data structures 30 5.1 Stacks 30 5.2 Queues 31 5.3 Linked lists 33 6 Binary search trees 36 6.1 Searching 38 6.2 Minimum and maximum 39 6.3 Successor and predecessor 39 6.4 Insertion and deletion 40 7 AVL trees 44 7.1 Bounds on the height 45 7.2 Restoring the AVL tree property 46 7.2.1 Rotations 46 7.2.2 Insertion 46 7.2.3 Deletion 49 8 Binomial Heaps 52 8.1 Binomial trees 52 8.2 Binomial heaps 54 8.3 Operations 55 8.3.1 Make heap 55 8.3.2 Minimum 55 8.3.3 Union 56 8.3.4 Insert
  • Exponential Structures for Efficient Cache-Oblivious Algorithms
    Exponential Structures for Efficient Cache-Oblivious Algorithms Michael A. Bender1, Richard Cole2, and Rajeev Raman3 1 Computer Science Department, SUNY Stony Brook, Stony Brook, NY 11794, USA. [email protected]. 2 Computer Science Department, Courant Institute, New York University, New York, NY 10012, USA. [email protected]. 3 Department of Mathematics and Computer Science, University of Leicester, Leicester LE1 7RH, UK. [email protected] Abstract. We present cache-oblivious data structures based upon exponential structures. These data structures perform well on a hierarchical memory but do not depend on any parameters of the hierarchy, including the block sizes and number of blocks at each level. The problems we consider are searching, partial persistence and planar point location. On a hierarchical memory where data is transferred in blocks of size B, some of the results we achieve are: – We give a linear-space data structure for dynamic searching that supports searches and updates in optimal O(logB N) worst-case I/Os, eliminating amortization from the result of Bender, Demaine, and Farach-Colton (FOCS ’00). We also consider finger searches and updates and batched searches. – We support partially-persistent operations on an ordered set, namely, we allow searches in any previous version of the set and updates to the latest version of the set (an update creates a new version of the set). All operations take an optimal O(logB (m+N)) amortized I/Os, where N is the size of the version being searched/updated, and m is the number of versions. – We solve the planar point location problem in linear space, taking optimal O(logB N) I/Os for point location queries, where N is the number of line segments specifying the partition of the plane.
  • Lecture 6 — February 9, 2017 1 Wrapping up on Hashing
    CS 224: Advanced Algorithms, Spring 2017 — Lecture 6, February 9, 2017. Prof. Jelani Nelson. Scribe: Albert Chalom.
    1 Wrapping up on hashing. Recall from last lecture that we showed that when using hashing with chaining of n items into m bins, where m = Θ(n) and the hash function h : [u] → [m] is drawn from a (C log n / log log n)-wise independent family, no linked list has size > C log n / log log n.
    1.1 Using two hash functions. Azar, Broder, Karlin, and Upfal '99 [1] then used two hash functions h, g : [u] → [m] that are fully random. Then ball i is inserted in the least loaded of the 2 bins h(i), g(i), and ties are broken randomly. Theorem: the max load is ≤ log log n / log 2 + O(1) with high probability. When using d instead of 2 hash functions we get max load ≤ log log n / log d + O(1) with high probability.
    1.2 Breaking n into d slots. Vöcking '03 [2] then considered breaking our n buckets into d slots, and having d hash functions h1, h2, ..., hd that hash into their corresponding slot of size n/d. Then insert(i) loads i into the least loaded of the d bins h1(i), ..., hd(i), and breaks ties by putting i in the left-most bin. This improves the performance to max load ≤ log log n / (d · φd), where 1.618... = φ2 < φ3 < ... ≤ 2.
    1.3 Survey: Mitzenmacher, Richa, Sitaraman [3]. Idea: Let Bi denote the number of bins with ≥ i balls.
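To see the effect described in sections 1.1–1.2, here is a small simulation sketch (illustrative code, not from the lecture notes; the function name two_choice_max_load and its parameters are made up) comparing one random choice per ball against the "power of two choices".

```python
# Simulate throwing n balls into m bins, each ball going to the least loaded
# of d uniformly random bins. With d = 2 the max load concentrates around
# log log n / log 2, versus roughly log n / log log n for d = 1.

import math
import random

def two_choice_max_load(n, m, d=2):
    loads = [0] * m
    for _ in range(n):
        choices = [random.randrange(m) for _ in range(d)]
        # min() keeps the first of equally loaded choices; since both choices
        # are random this is close enough to random tie-breaking for a demo.
        best = min(choices, key=lambda b: loads[b])
        loads[best] += 1
    return max(loads)

n = m = 100_000
print("one choice :", two_choice_max_load(n, m, d=1))
print("two choices:", two_choice_max_load(n, m, d=2))
print("log log n / log 2 =", math.log(math.log(n)) / math.log(2))
```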
  • Data Structures and Programming Spring 2016, Final Exam
    Data Structures and Programming, Spring 2016, Final Exam. June 21, 2016
    1. (15 pts) True or False? (Mark ✓ for True, × for False. Score = max{0, Right − 2·Wrong}. No explanations are needed.)
    (1) Let A1, A2, and A3 be three sorted arrays of n real numbers (all distinct). In the comparison model, constructing a balanced binary search tree of the set A1 ∪ A2 ∪ A3 requires Ω(n log n) time. × False (Merge A1, A2, A3 in linear time; then pick recursively the middle as root (in linear time).)
    (2) Let T be a complete binary tree with n nodes. Finding a path from the root of T to a given vertex v ∈ T using breadth-first search takes O(log n) time. × False (Note that the tree is NOT a search tree.)
    (3) Given an unsorted array A[1...n] of n integers, building a max-heap out of the elements of A can be performed asymptotically faster than building a red-black tree out of the elements of A. ✓ True (O(n) for building a max-heap; Ω(n log n) for building a red-black tree.)
    (4) In the worst case, a red-black tree insertion requires O(1) rotations. ✓ True (See class notes)
    (5) In the worst case, a red-black tree deletion requires O(1) node recolorings. × False (See class notes)
    (6) Building a binomial heap from an unsorted array A[1...n] takes O(n) time. ✓ True (See class notes)
    (7) Insertion into an AVL tree asymptotically beats insertion into an AA-tree. × False (See class notes)
    (8) The subtree of the root of a red-black tree is always itself a red-black tree.
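Statement (3) hinges on the classic fact that a max-heap can be built bottom-up in O(n) time. A short sketch of that construction (my own illustration, not part of the exam; build_max_heap and sift_down are assumed names):

```python
# Build a max-heap in place by sifting down from the last internal node.
# The total work over all levels is O(n), versus the Omega(n log n) lower
# bound for building a red-black tree from unsorted input.

def build_max_heap(a):
    n = len(a)

    def sift_down(i):
        while True:
            l, r, largest = 2 * i + 1, 2 * i + 2, i
            if l < n and a[l] > a[largest]:
                largest = l
            if r < n and a[r] > a[largest]:
                largest = r
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):
        sift_down(i)
    return a

print(build_max_heap([3, 9, 2, 7, 5, 1, 8]))  # root (index 0) holds the maximum, 9
```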
  • Data Structures Binary Trees and Their Balances Heaps Tries B-Trees
    Data Structures (cheat sheet). Note: Strange structure due to finals topics. Will improve later.
    Tries: Compressing dictionary structure in a tree. General tries can grow based on the dictionary, and quite fast. Long paths are inefficient, so we define compressed tries, which are tries with no non-splitting vertices. Initialization is O(Σ · l) for uncompressed tries, O(l + Σ) for compressed (we are only making one initialization of a vertex — the last one). T: Assuming all l-length sequences come with uniform probability and we have a sampling of size n, E[c-trie size] = log d. P: q_d ≡ probability the trie has depth d; E[c-trie size] = Σ_d d(q_d − q_{d+1}) = Σ_d q_d. Calculate the opposite event — the trie is within depth d − 1.
    Separated chains ≡ just linked lists for each bucket. P[one list has size exactly l] = (s choose l)(1/m)^l (1 − 1/m)^{n−l}. Since this is a binomial distribution, easily E[chain length] = Σ_l l · P[len = l] = α. Expected maximum list length: P[max_i len(i) ≥ j] ≤ Σ_i P[len(i) ≥ j] ≤ m (s choose j)(1/m)^j = (Π_{k=0..j−1}(s − k))/j! · (1/m)^{j−1} ≤ s(s/m)^{j−1}/j!.
    Binary trees and their balances — AVL trees: Vertices contain −1, 0, +1 — the difference of balances between the left and the right subtree. Balance is just the maximum depth of the subtree. Analysis: prove by induction that for depth k, there are between F_k and 2^k vertices in every tree. Since balance only shifts by 1, it's pretty simple.
  • Lecture Notes of CSCI5610 Advanced Data Structures
    Lecture Notes of CSCI5610 Advanced Data Structures Yufei Tao Department of Computer Science and Engineering Chinese University of Hong Kong July 17, 2020 Contents 1 Course Overview and Computation Models 4 2 The Binary Search Tree and the 2-3 Tree 7 2.1 The binary search tree 7 2.2 The 2-3 tree 9 2.3 Remarks 13 3 Structures for Intervals 15 3.1 The interval tree 15 3.2 The segment tree 17 3.3 Remarks 18 4 Structures for Points 20 4.1 The kd-tree 20 4.2 A bootstrapping lemma 22 4.3 The priority search tree 24 4.4 The range tree 27 4.5 Another range tree with better query time 29 4.6 Pointer-machine structures 30 4.7 Remarks 31 5 Logarithmic Method and Global Rebuilding 33 5.1 Amortized update cost 33 5.2 Decomposable problems 34 5.3 The logarithmic method 34 5.4 Fully dynamic kd-trees with global rebuilding 37 5.5 Remarks 39 6 Weight Balancing 41 6.1 BB[α]-trees 41 6.2 Insertion 42 6.3 Deletion 42 6.4 Amortized analysis 42 6.5 Dynamization with weight balancing 43 6.6 Remarks 44 7 Partial Persistence 47 7.1 The potential method 47 7.2 Partially persistent BST 48 7.3 General pointer-machine structures 52 7.4 Remarks 52 8 Dynamic Perfect Hashing 54 8.1 Two random graph results 54 8.2 Cuckoo hashing 55 8.3 Analysis 58 8.4 Remarks 59 9 Binomial and Fibonacci Heaps 61 9.1 The binomial heap
  • 18.415 Advanced Algorithms Contents
    18.415 Advanced Algorithms Notes from Fall 2020. Last Updated: November 27, 2020. Contents Introduction 6 1 Lecture 1: Fibonacci Heaps 7 1.1 MST problem review 7 1.2 Using d-heaps for MST 8 1.3 Introduction to Fibonacci Heaps 8 2 Lecture 2: Fibonacci Heaps, Persistent Data Structs 12 2.1 Fibonacci Heaps Continued 12 2.2 Applications of Fibonacci Heaps 14 2.2.1 Further Improvements 15 2.3 Introduction to Persistent Data Structures 15 3 Lecture 3: Persistent Data Structs, Splay Trees 16 3.1 Persistent Data Structures Continued 16 3.2 The Planar Point Location Problem 17 3.3 Introduction to Splay Trees 19 4 Lecture 4: Splay Trees 20 4.1 Heuristics of Splay Trees 20 4.2 Solving the Balancing Problem 20 4.3 Implementation and Analysis of Operations 21 4.4 Results with the Access Lemma 23 5 Lecture 5: Splay Updates, Buckets 25 5.1 Wrapping up Splay Trees 25 5.2 Buckets and Indirect Addressing 26 5.2.1 Improvements 27 5.2.2 Formalization with Tries 27 5.2.3 Laziness wins again 28 6 Lecture 6: VEB queues, Hashing 29 6.1 Improvements
  • FIBONACCI HEAPS ‣ Preliminaries ‣ Insert ‣ Extract the Minimum
    FIBONACCI HEAPS — lecture slides by Kevin Wayne, http://www.cs.princeton.edu/~wayne/kleinberg-tardos (last updated on Apr 15, 2013).
    ‣ preliminaries ‣ insert ‣ extract the minimum ‣ decrease key ‣ bounding the rank ‣ meld and delete

    Priority queues performance cost summary († amortized):

        operation      linked list   binary heap   binomial heap   Fibonacci heap †
        MAKE-HEAP      O(1)          O(1)          O(1)            O(1)
        IS-EMPTY       O(1)          O(1)          O(1)            O(1)
        INSERT         O(1)          O(log n)      O(log n)        O(1)
        EXTRACT-MIN    O(n)          O(log n)      O(log n)        O(log n)
        DECREASE-KEY   O(1)          O(log n)      O(log n)        O(1)
        DELETE         O(1)          O(log n)      O(log n)        O(log n)
        MELD           O(1)          O(n)          O(log n)        O(1)
        FIND-MIN       O(n)          O(1)          O(log n)        O(1)

    Ahead: O(1) INSERT and DECREASE-KEY, O(log n) EXTRACT-MIN.

    Fibonacci heaps. Theorem [Fredman-Tarjan 1986]. Starting from an empty Fibonacci heap, any sequence of m INSERT, EXTRACT-MIN, and DECREASE-KEY operations involving n INSERT operations takes O(m + n log n) time. (This statement is a bit weaker than the actual theorem.)
    History.
    ・Ingenious data structure and application of amortized analysis.
    ・Original motivation: improve Dijkstra's shortest path algorithm.
    Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms, MICHAEL L.
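The Dijkstra motivation is where the O(m + n log n) bound comes from: the algorithm issues n INSERTs, n EXTRACT-MINs, and at most m DECREASE-KEYs. Below is a rough Python sketch (my own illustration, not from the slides; dijkstra and the toy dict-based queue are assumed names) annotating where each heap operation occurs.

```python
# Dijkstra with its heap operations counted. The toy queue here is just a dict
# with a linear-scan extract-min; swapping in a Fibonacci heap makes INSERT and
# DECREASE-KEY O(1) amortized and EXTRACT-MIN O(log n) amortized, giving
# O(m + n log n) overall.

def dijkstra(graph, source):
    """graph: {u: [(v, weight), ...]}; returns shortest-path distances from source."""
    dist = {u: float("inf") for u in graph}
    dist[source] = 0
    pq = dict(dist)                    # n INSERTs
    while pq:
        u = min(pq, key=pq.get)        # n EXTRACT-MINs (the O(n log n) part)
        d = pq.pop(u)
        for v, w in graph[u]:
            if v in pq and d + w < dist[v]:
                dist[v] = d + w
                pq[v] = dist[v]        # <= m DECREASE-KEYs (the O(m) part)
    return dist

g = {"s": [("a", 2), ("b", 5)], "a": [("b", 1)], "b": []}
print(dijkstra(g, "s"))   # {'s': 0, 'a': 2, 'b': 3}
```

Replacing the toy queue with a Fibonacci heap is exactly what turns the n extract-mins plus up-to-m decrease-keys into the stated O(m + n log n) bound.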
  • Appendix A Sift Optimizations in Van Emde Boas-Based Heap
    Nuno Filipe Monteiro Faria. Locality optimizations on irregular algorithms and data structures. Master's dissertation, Informatics Engineering, carried out under the supervision of Professor Doutor João Luís Sobral. October 2011. Acknowledgments. This dissertation would not have been possible without the aid of my parents and my sister, who have always endured and supported me through my life. Second, I thank my advisor, Professor João Luís Sobral, for giving me the opportunity to join him and his team in research, for fostering my critical thinking, guiding me and sharing his experience and wisdom with me. I would also like to thank the professors I found along my master course for introducing me to the highly motivating theme of high performance computing: Professor João Luís Sobral, Professor Alberto Proença, Professor Rui Ralha and Professor António Pina. Last but not least, thank you to all my friends and lab colleagues in university, Roberto Ribeiro, Rui Silva, Rui Gonçalves, Diogo Mendes, Nuno Silva, for all the brainstorm-ish discussions and opinions that allowed me to improve and push myself further - a few themes presented in this dissertation were discussed in some of those brainstorms. To my colleagues/room-mates through my graduate years with whom I shared unforgettable experiences, Nuno Silva, Pedro Silva, Emanuel Gonçalves, André Félix, Samuel Moreira, Roberto Ribeiro and Pedro Miranda - I believe I have made friends for life. Nuno Filipe Monteiro Faria. The project that served as ground for this dissertation was funded under the agreement of the protocol between FCT and UTAustin, Parallel Programming Refinements for Irregular Applications (UTAustin/CA/0056/2008). Locality optimizations in irregular algorithms and data structures. Abstract: Pointer-based graph structures have been widely discussed by several scientific communities that intend to use this type of structure.