Relaxed Fibonacci Heaps: An Alternative to Fibonacci Heaps with Worst Case Rather Than Amortized Time Bounds∗



  • Chandrasekhar Boyapati
  • C. Pandu Rangan

Department of Computer Science and Engineering Indian Institute of Technology, Madras 600036, India
Email: [email protected]

November 1995

Introduction

The implementation of priority queues is a classical problem in data structures. Priority queues find applications in a variety of network problems like single source shortest paths, all pairs shortest paths, minimum spanning tree, weighted bipartite matching etc. [1] [2] [3] [4] [5]
In the amortized sense, the best performance is achieved by the well known Fibonacci heaps. They support delete and delete min in O(log n) amortized time and find min, insert, decrease key and meld in O(1) amortized time.

Fast meldable priority queues described in [1] achieve all the above time bounds in the worst case rather than amortized, except for the decrease key operation, which takes O(log n) worst case time. On the other hand, relaxed heaps described in [2] achieve in the worst case all the time bounds of the Fibonacci heaps except for the meld operation, which takes O(log n) worst case time. The problem that was posed in [1] was to consider if it is possible to support both decrease key and meld simultaneously in constant worst case time.

In this paper, we solve this open problem by presenting relaxed Fibonacci heaps as a new priority queue data structure for a Random Access Machine (RAM). (The new data structure is primarily designed by relaxing some of the constraints in Fibonacci heaps, hence the name relaxed Fibonacci heaps.) Our data structure supports the operations find minimum, insert, decrease key and meld, each in O(1) worst case time, and delete and delete min in O(log n) worst case time. The following table summarizes the discussion so far (bounds for Fibonacci heaps are amortized; all others are worst case):

                              find min   insert   decrease key   meld       delete min
    Fibonacci heaps [3]       O(1)       O(1)     O(1)           O(1)       O(log n)
    Fast meldable queues [1]  O(1)       O(1)     O(log n)       O(1)       O(log n)
    Relaxed heaps [2]         O(1)       O(1)     O(1)           O(log n)   O(log n)
    Relaxed Fibonacci heaps   O(1)       O(1)     O(1)           O(1)       O(log n)

For simplicity we assume that all priority queues have at least three elements. We use the symbol F to denote a relaxed Fibonacci heap and n to denote the size of the priority queue it represents. Unless otherwise mentioned, all the time bounds we state are for the worst case.

In Section 1, we prove some results regarding binomial trees which will be central to establishing the O(log n) time bound for the delete min operation. In Section 2, we describe the relaxed Fibonacci heaps. In Section 3, we describe the various operations on relaxed Fibonacci heaps.

1 Some results regarding binomial trees

Consider the following problem. We are given a binomial tree B [5] whose root has degree m. The children of any node in B are arranged in the increasing order of their degrees. That is, if the children of a node p are p_0, p_1, ..., p_{k-1}, then degree(p_i) = i.

We are to remove some nodes from this tree. Every time a node gets removed, the entire subtree rooted at that node also gets removed. Suppose the resulting tree is B', which is not necessarily a binomial tree. For any node p in B', l(p) denotes the number of children lost by p. For any node p in B', define w(p), the weight of node p, as follows: w(p) = 0, if l(p) = 0. Else, w(p) = 2^(l(p)-1). Define the weight W = Σ w(p), the sum taken over all nodes p in B'. Given an upper bound on the weight W that B can lose, let B' be the tree obtained by removing as many nodes from B as possible.
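As a concrete reference for the setup above, here is a minimal Python sketch (an illustration, not from the paper) of a binomial tree whose children appear in increasing order of their degrees, together with the weight function w:

```python
class BNode:
    """Node of a binomial tree; a node of degree m has children of degrees 0..m-1."""
    def __init__(self, degree):
        self.degree = degree
        # Children are created in increasing order of their degrees.
        self.children = [BNode(d) for d in range(degree)]

    def size(self):
        """Number of nodes in the subtree rooted here; 2^degree for a full binomial tree."""
        return 1 + sum(c.size() for c in self.children)

def w(l):
    """Weight of a node that has lost l children: 0 if l == 0, else 2^(l-1)."""
    return 0 if l == 0 else 2 ** (l - 1)
```

For instance, a binomial tree of degree 4 has exactly 2^4 = 16 nodes, and its children have degrees 0, 1, 2, 3.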

Lemma 1 The tree B' defined above has the following properties:

1. Let p be any node in B'. If l(p) = 0 and if q is a descendant of p, then l(q) = 0.

2. Let p be any node in B' such that l(p) = k. Then the k children lost by p are its last k children (which had the highest degrees in B).

3. Let p and q be two nodes that have the same parent such that degree(p) < degree(q) in B. Then, in B', l(p) ≤ l(q).

4. Let a node have four consecutive children c_i, c_{i+1}, c_{i+2} and c_{i+3} belonging to B'. Then it cannot be true that l(c_i) = l(c_{i+1}) = l(c_{i+2}) = l(c_{i+3}) ≠ 0.

Proof: If any of the 4 statements above is violated, then we can easily show that, by reorganizing the removal of nodes from B, we can increase the number of nodes removed from B without increasing the weight removed. In particular, if statement 4 is violated, then we can reorganize the removal of nodes by increasing l(c_{i+3}) by one and decreasing l(c_i) and l(c_{i+1}) by one each.
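The weight bookkeeping behind this reorganization can be checked mechanically. The sketch below (an illustration, not part of the paper) verifies that replacing loss counts (k, k, k, k) on four consecutive children by (k-1, k-1, k, k+1) never increases the total weight:

```python
def w(l):
    # Weight of a node that has lost l children: 0 if l == 0, else 2^(l-1).
    return 0 if l == 0 else 2 ** (l - 1)

for k in range(1, 20):
    before = 4 * w(k)                        # losses (k, k, k, k)
    after = 2 * w(k - 1) + w(k) + w(k + 1)   # losses (k-1, k-1, k, k+1)
    assert after <= before, (k, before, after)
```

For k ≥ 2 the two sides are in fact equal (both 2^(k+1)), while the number of nodes removed strictly increases, since the extra subtree removed from c_{i+3} is larger than the two subtrees given back to c_i and c_{i+1} combined.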

Lemma 2 Let B be a binomial tree whose root has degree m ≥ 4·d_0 + 3, where d_0 = ⌈log n⌉, for some n. Let the upper bound on the weight that B can lose be 2^(d_0). Then B' will have more than n nodes in it.

Proof: Let the children of the root of B be p_0, p_1, ..., p_{m-1}. We first claim that l(p_{d_0}) = 0. Or else, it follows from the previous lemma that l(p_{m-1}) ≥ d_0 + 1: by statement 3 every p_j with j ≥ d_0 has l(p_j) ≥ 1, and statement 4 implies that there can be at most three consecutive nodes which have lost the same number of children, so over the at least 3·d_0 + 2 remaining positions the loss count must increase at least d_0 times. This in turn implies that the weight lost by B is greater than w(p_{m-1}) ≥ 2^(d_0), which is not possible. Thus our claim holds.

Since l(p_{d_0}) = 0, no nodes are deleted from the subtree rooted at p_{d_0}, according to statement 1 of the previous lemma. Thus the number of nodes in B' is greater than 2^(d_0) ≥ n.

2 The relaxed Fibonacci heaps

Our basic representation of a priority queue is a heap ordered tree where each node contains one element. This is slightly different from binomial heaps [4] and Fibonacci heaps [3] where the representation is a forest of heap ordered trees.


2.1 Properties of the relaxed Fibonacci heaps

We partition the children of a node into two types, type I and type II. A relaxed Fibonacci heap must satisfy the following constraints.

1. A node of type I has at most one child of type II. A node of type II cannot have any children of type II.

2. With each node x we associate a field d(x) which denotes the number of children of type I that x has. (Thus the number of children of any node x is either d(x) + 1 or d(x), depending on whether x has a child of type II or not.)

   (a) The root R is of type I and has d(R) equal to zero. R has a child R' of type II.

   (b) Let d(R') = k and let the children of R' be R_0, R_1, ..., R_{k-1}. Then, k = d(R_{k-1}) + 1.

3. With each node x of type I we associate a field l(x) which denotes the number of children of type I lost by x since its l field was last reset to zero.

   For any node x of type I in F, define w(x), the weight of node x, as follows: w(x) = 0, if l(x) = 0. Else, w(x) = 2^(l(x)-1).

   Also for any node x of type I, define Δ(x) as the increase in w(x) due to x losing its last child. That is, Δ(x) = 0 or 1 or 2^(l(x)-2) respectively, depending on whether l(x) = 0 or 1 or greater than one.

   Define the weight W = Σ w(x), the sum taken over all nodes x of type I in F.

   Every relaxed Fibonacci heap has a special variable P, which is equal to one of the nodes of the tree. Initially, P = R.

   (a) l(R) = l(R_0) = l(R_1) = ... = l(R_{k-1}) = 0.

   (b) Let x be any node of type I and let the children of x of type I be x_0, x_1, ..., x_{d(x)-1}. Then, for any x_i, d(x_i) + l(x_i) ≥ i.

   (c) W + Δ(P) ≤ n.

4. Associated with F we have a list L = (M_1, M_2, ...) of all nodes of type II in F other than R'. Each node M_i was originally the R' of some relaxed Fibonacci heap till some meld operation. Let n_i denote the number of nodes in that heap just before that meld operation.

   (a) d(M_i) ≤ 4·⌈log n_i⌉ + 4.

   (b) n_1 + n_2 + ... ≤ n.
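The fields introduced by properties 1 through 4 can be summarized in a small Python sketch. All names here (TYPE_I, d, l, and so on) are illustrative choices, not notation fixed by the paper:

```python
TYPE_I, TYPE_II = 1, 2

class Node:
    def __init__(self, key, kind=TYPE_I):
        self.key = key
        self.kind = kind
        self.type1_children = []   # d(x) = number of type I children
        self.type2_child = None    # at most one type II child (property 1)
        self.l = 0                 # type I children lost since l was last reset

    @property
    def d(self):
        return len(self.type1_children)

    def w(self):
        # w(x) = 0 if l(x) = 0, else 2^(l(x)-1)  (property 3)
        return 0 if self.l == 0 else 2 ** (self.l - 1)

def total_weight(nodes):
    """W = sum of w(x) over all nodes x of type I."""
    return sum(x.w() for x in nodes if x.kind == TYPE_I)
```

Property 3c would then be the invariant that total_weight over the whole tree, plus the pending increase Δ(P), never exceeds the number of elements n.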



Figure 1: A relaxed Fibonacci heap

Example

Figure 1 shows a relaxed Fibonacci heap. The nodes of type I are represented by circles and the nodes of type II are represented by squares. Each node contains its element and its l value, or just its element if l = 0. The node P and the list L are also shown.

Lemma 3 The heap order implies that the minimum element is at the root.

Lemma 4 For any node x of type I, d(x) ≤ 4·d_0 + 3, where d_0 = ⌈log n⌉.

Proof: On the contrary, say d(x) = 4·d_0 + 4. For any node y of type I with l(y) ≥ 1, the weight W, when expanded, will contain the term w(y) = 2^(l(y)-1); and w(y) = 0 whenever l(y) = 0. Thus, according to property 3c, the total weight lost by the nodes of the subtree rooted at x, say B_x, is at most W ≤ W + Δ(P) ≤ n ≤ 2^(d_0).

Let us now try to estimate the minimum number of nodes present in B_x. Let us first remove all the children of type II in B_x and their descendants. In the process, we might be decreasing the number of nodes in the tree, but we will not be increasing the weight lost.

In the resulting tree B'_x, we know from property 3b that the (i+1)st type I child y_i of any node satisfies d(y_i) + l(y_i) ≥ i. But for the binomial tree problem described in Section 1, the (i+1)st child p_i satisfies d(p_i) + l(p_i) = i. Thus the minimum number of nodes present in B'_x is at least equal to the minimum number of nodes present in a binomial tree whose root has degree 4·d_0 + 4 after it loses a weight less than or equal to 2^(d_0). But according to Lemma 2, this is more than n. Thus B_x has more than n nodes, which is obviously not possible.

Lemma 5 For any node x in F, the number of children of x is at most 4·d_0 + 4, where d_0 = ⌈log n⌉.

Proof: If x is a node of type I, the number of children of x is at most d(x) + 1 ≤ 4·d_0 + 4, according to the previous lemma.

If x = R', then from property 2b, d(R') = d(R_{k-1}) + 1 ≤ 4·d_0 + 4, according to the previous lemma, since R_{k-1} is of type I. Thus the number of children of R' is at most 4·d_0 + 4.

If x is any node of type II other than R', then x is equal to some M_i in L, according to property 4. According to property 4a, d(M_i) ≤ 4·⌈log n_i⌉ + 4. But according to property 4b, n_i ≤ n. Thus the number of children of x is at most 4·d_0 + 4.

Remarks

The restrictions imposed by property 3c are much weaker than those in fast meldable priority queues [1] or in relaxed heaps [2]. But according to the above lemma, the number of children of any node is still O(log n). We believe that the introduction of property 3c is the most important contribution of this paper.

2.2 Representation

The representation is similar to that of Fibonacci heaps as described in [5]. The children of type I of every node are stored in a doubly linked list, sorted according to their degrees. Besides, each node of type I has an additional child pointer which can point to a child of type II.

To preserve the sorted order, every node that is inserted as a child of R' must be inserted at the appropriate place. To achieve this in constant time, we maintain an auxiliary array A such that A[d] points to the first child of R' of degree d.

We will also need to identify two children of R' of the same degree, if they exist. To achieve this in constant time, we maintain a linked list of pairs of nodes that are children of R' and have the same degree, and a boolean array B such that B[d] is true if and only if the number of children of R' of degree d is even.

Besides, we will also need to identify some node x such that l(x) ≥ 1, if such a node exists. To implement this in constant time, we maintain a list of all nodes x such that l(x) ≥ 1.
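A possible concrete form of these auxiliary structures, with assumed names (bucket for the degree-indexed array A, even for the boolean array B): for brevity the sketch keeps the children of R' in per-degree buckets instead of one sorted doubly linked list.

```python
class ChildIndex:
    """Constant-time access to the children of R' by degree (illustrative sketch)."""
    def __init__(self, max_degree):
        # bucket[d]: the children of R' that have degree d.
        self.bucket = [[] for _ in range(max_degree + 1)]
        # even[d]: True iff the number of degree-d children is even.
        self.even = [True] * (max_degree + 1)

    def insert(self, node, degree):
        self.bucket[degree].append(node)
        self.even[degree] = not self.even[degree]

    def pair(self, degree):
        """Two distinct children of the given degree, if they exist."""
        b = self.bucket[degree]
        return (b[0], b[1]) if len(b) >= 2 else None
```

The parity array mirrors the role described above: whenever a degree class has an even (nonzero) count, at least one pair of equal-degree children is available for linking.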

3 Operations on relaxed Fibonacci heaps

In this section, we will describe how to implement the various operations on relaxed Fibonacci heaps. But before we do that, we will describe a few basic operations, namely link, add, reinsert and adjust.
Though we will not be mentioning it explicitly every time, we will assume that whenever a node of type I loses a child, its l field is automatically incremented by one, unless the node is R' or a child of R'. Similarly, whenever a node is inserted as a child of R', its l field is automatically reset to zero. We also assume that if the node P gets deleted from the tree after a delete min operation, then P is reset to R, to ensure that P still belongs to the tree.

3.1 Some basic operations

The link operation is similar to the linking of trees in binomial heaps and Fibonacci heaps.

Algorithm 1 : Link(F, x, y)

/* Link the two trees rooted at x and y into one tree */
/* x and y are children of R' and have equal degrees, say d */

1. Delete the subtrees rooted at x and y from F
2. If key(y) < key(x) then swap(x, y)
3. Make y the last child of x
4. Make x (whose degree is now d + 1) a child of R' of F
  • of

Algorithm 2 : Add(F, x)

/* Add the tree rooted at x to the relaxed Fibonacci heap F */

1. Make x a child of R' of F
2. If key(x) < key(R') then interchange the elements of x and R'
3. If key(R') < key(R) then interchange the elements of R' and R
4. If among the children of R' there exist two different nodes y and z such that degree(y) = degree(z) then Link(F, y, z)
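The two basic operations above can be sketched as follows. This is an illustration under assumptions (children kept in increasing order of degree; the element interchanges of Add, which restore the minimum at the root, are omitted), not the paper's definitive implementation:

```python
class TNode:
    def __init__(self, key):
        self.key = key
        self.children = []          # type I children, in increasing order of degree

    @property
    def degree(self):
        return len(self.children)

def link(r_prime, x, y):
    """Link two equal-degree children x, y of R' into one tree (Algorithm 1)."""
    r_prime.children.remove(x)
    r_prime.children.remove(y)
    if y.key < x.key:               # keep the smaller key on top (heap order)
        x, y = y, x
    x.children.append(y)            # y becomes the last (highest-degree) child of x
    r_prime.children.append(x)      # x, now of degree d + 1, rejoins R'
    return x

def add(r_prime, x):
    """Attach x under R', then link any equal-degree pair (Algorithm 2, steps 1 and 4)."""
    r_prime.children.append(x)
    seen = {}
    for c in r_prime.children:      # a real implementation finds such a pair in O(1)
        if c.degree in seen:
            return link(r_prime, seen[c.degree], c)
        seen[c.degree] = c
    return x
```

Adding two single nodes under R' therefore leaves R' with one degree-1 child whose root carries the smaller of the two keys.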
