1/9/2014

Binary Trees, Binary Search Trees

www.cs.ust.hk/~huamin/COMP171/bst.ppt

Some Terminologies

• Child and parent
– Every node except the root has one parent
– A node can have an arbitrary number of children
• Leaves – nodes with no children
• Sibling – nodes with the same parent

Trees

• Linear access time of linked lists is prohibitive
– Does there exist any simple data structure for which the running time of most operations (search, insert, delete) is O(log N)?

Some Terminologies

• Path
– Length – number of edges on the path
• Depth of a node
– length of the unique path from the root to that node
– The depth of a tree is equal to the depth of the deepest leaf
• Height of a node
– length of the longest path from that node to a leaf
– all leaves are at height 0
– The height of a tree is equal to the height of the root
• Ancestor and descendant
– Proper ancestor and proper descendant

Trees

• A tree is a collection of nodes
– The collection can be empty
– (recursive definition) If not empty, a tree consists of a distinguished node r (the root), and zero or more nonempty subtrees T1, T2, ..., Tk, each of whose roots is connected by a directed edge from r

Example: UNIX Directory


Binary Trees

• A tree in which no node can have more than two children
• The depth of an "average" binary tree is considerably smaller than N, even though in the worst case the depth can be as large as N – 1

Preorder, Postorder and Inorder

• Preorder traversal – node, left, right
– prefix expression: ++a*bc*+*defg
• Postorder traversal – left, right, node
– postfix expression: abc*+de*f+g*+
• Inorder traversal – left, node, right
– infix expression: a+b*c+d*e+f*g

Example: Expression Trees

• Leaves are operands (constants or variables)
• The other nodes (internal nodes) contain operators
• Will not be a binary tree if some operators are not binary

Tree traversal

• Used to print out the data in a tree in a certain order
• Pre-order traversal
– Print the data at the root
– Recursively print out all data in the left subtree
– Recursively print out all data in the right subtree


Compare: implementation of a general tree

(The Preorder and Postorder slides show the traversal orders on an example tree.)

Binary Search Trees

• Stores keys in the nodes in a way so that searching, insertion and deletion can be done efficiently
• Binary search tree property
– For every node X, all the keys in its left subtree are smaller than the key value in X, and all the keys in its right subtree are larger than the key value in X

Binary Trees

• Possible operations on the Binary Tree ADT
– parent
– left_child, right_child
– sibling
– root, etc.
• Implementation
– Because a binary tree has at most two children, we can keep direct pointers to them

Binary Search Trees

(Figure: one example tree satisfies the BST property; the other is not a binary search tree.)


Binary search trees

• Two binary search trees representing the same set (figure)
• Average depth of a node is O(log N); maximum depth of a node is O(N)

Implementation

(Figure: a node with a key and left/right child pointers.)

Searching (Find)

• Find X: return a pointer to the node that has key X, or NULL if there is no such node
• Time complexity: O(height of the tree)

Searching BST

• If we are searching for 15, then we are done
• If we are searching for a key < 15, then we should search in the left subtree
• If we are searching for a key > 15, then we should search in the right subtree

Inorder traversal of BST

• Print out all the keys in sorted order
• Inorder: 2, 3, 4, 6, 7, 9, 13, 15, 17, 18, 20
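The search and inorder traversal described above can be sketched in Python (a minimal illustration, not the course's implementation; `Node` is a hypothetical three-field node):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def find(root, key):
    """Return the node holding key, or None -- O(height of the tree)."""
    while root is not None:
        if key < root.key:
            root = root.left          # key smaller: search the left subtree
        elif key > root.key:
            root = root.right         # key larger: search the right subtree
        else:
            return root               # found it
    return None

def inorder(root):
    """Yield the keys in sorted order (left, node, right)."""
    if root is not None:
        yield from inorder(root.left)
        yield root.key
        yield from inorder(root.right)
```

On the example tree rooted at 15, `list(inorder(root))` reproduces the sorted key list above.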


findMin / findMax

• Return the node containing the smallest element in the tree
• Start at the root and go left as long as there is a left child; the stopping point is the smallest element
• Similarly for findMax
• Time complexity = O(height of the tree)

delete

Three cases:
(1) the node is a leaf
– Delete it immediately
(2) the node has one child
– Adjust a pointer from the parent to bypass that node

insert

• Proceed down the tree as you would with a find
• If X is found, do nothing (or update something)
• Otherwise, insert X at the last spot on the path traversed
• Time complexity = O(height of the tree)

delete (continued)

(3) the node has 2 children
– Replace the key of that node with the minimum element of the right subtree, then delete that minimum element
– The minimum of the right subtree has either no child or only a right child, because if it had a left child, that left child would be smaller and would have been chosen; so invoke case 1 or 2
• Time complexity = O(height of the tree)
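A Python sketch of insert and the three-case delete just described (illustrative names, not the course's code; the two-children case copies up the minimum of the right subtree and then deletes it there):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Insert key and return the (possibly new) root -- O(height)."""
    if root is None:
        return Node(key)              # the last spot on the path traversed
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root                       # key already present: do nothing

def delete(root, key):
    """Delete key if present and return the new root -- O(height)."""
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    elif root.left and root.right:    # case 3: two children
        m = root.right
        while m.left:                 # minimum of the right subtree
            m = m.left
        root.key = m.key              # copy it up...
        root.right = delete(root.right, m.key)   # ...then case 1 or 2
    else:                             # cases 1 and 2: bypass the node
        root = root.left or root.right
    return root
```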

delete

• When we delete a node, we need to consider how we take care of the children of the deleted node
– This has to be done such that the property of the search tree is maintained

Priority Queues – Binary Heaps

homes.cs.washington.edu/~anderson/iucee/Slides_326.../Heaps.ppt


Recall Queues

• FIFO: First-In, First-Out
• Some contexts where this seems right?
• Some contexts where some things should be allowed to skip ahead in the line?

Applications of the Priority Queue

• Select print jobs in order of decreasing length
• Forward packets on routers in order of urgency
• Select most frequent symbols for compression
• Sort numbers, picking minimum first
• Anything greedy

Queues that Allow Line Jumping

• Need a new ADT
• Operations: Insert an Item, Remove the "Best" Item

Potential Implementations

                              insert    deleteMin
Unsorted list (Array)         O(1)      O(n)
Unsorted list (Linked-List)   O(1)      O(n)
Sorted list (Array)           O(n)      O(1)*
Sorted list (Linked-List)     O(n)      O(1)

Priority Queue ADT

1. PQueue data: collection of data with priority
2. PQueue operations
– insert
– deleteMin
3. PQueue property: for two elements in the queue, x and y, if x has a lower priority value than y, x will be deleted before y

Recall From Lists, Queues, Stacks

• Use an ADT that corresponds to your needs
• The right ADT is efficient, while an overly general ADT provides functionality you aren't using, but are paying for anyway
• Heaps provide O(log n) worst case for both insert and deleteMin, and O(1) average insert


Brief interlude: Some Definitions

• A perfect binary tree – a binary tree with all leaf nodes at the same depth; all internal nodes have 2 children
• A perfect binary tree of height h has 2^(h+1) – 1 nodes, 2^h – 1 non-leaves, and 2^h leaves

Binary Heap Properties

1. Structure Property
2. Ordering Property

Heap Structure Property

• A binary heap is a complete binary tree
• Complete binary tree – a binary tree that is completely filled, with the possible exception of the bottom level, which is filled left to right

Tree Review (exercise, on the example tree T with root A)

root(T): leaves(T): children(B): parent(H): siblings(E): ancestors(F): descendents(G): subtree(C):

More Tree Terminology (exercise, on the same tree T)

depth(B): height(G): degree(B): branching factor(T):

Representing Complete Binary Trees in an Array

• From node i:
– left child: 2i
– right child: 2i + 1
– parent: i / 2
• Implicit (array) implementation: the example nodes A B C D E F G H I J K L stored consecutively, one node per index
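The implicit array layout can be sketched as index arithmetic in Python; here the root sits at index 1 and index 0 is unused (an assumption that matches the `hole/2` arithmetic in the heap code later in these slides):

```python
# 1-based complete-binary-tree indexing: root at index 1, index 0 unused
def left(i):   return 2 * i
def right(i):  return 2 * i + 1
def parent(i): return i // 2

# the slide's example nodes A..L laid out implicitly
tree = [None, 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L']
```

For example, B at index 2 has children tree[left(2)] == 'D' and tree[right(2)] == 'E', and tree[parent(8)] == 'D'.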



Why this approach to storage?

Heap – Insert(val)

Basic Idea:
1. Put val at "next" leaf position
2. Percolate up by repeatedly exchanging the node with its parent until no longer needed

Heap Order Property

• For every non-root node X, the value in the parent of X is less than (or equal to) the value in X
(Figure: a valid heap rooted at 10, and a second tree that is not a heap.)

Insert: percolate up

(Figure: 15 is placed at the next leaf position of the heap rooted at 10 and percolates up past its larger ancestors.)

Heap Operations

• findMin: return the root
• insert(val): percolate up
• deleteMin: percolate down

Insert Code (optimized)

void insert(Object o) {
  assert(!isFull());
  size++;
  newPos = percolateUp(size, o);
  Heap[newPos] = o;
}

int percolateUp(int hole, Object val) {
  while (hole > 1 && val < Heap[hole/2]) {
    Heap[hole] = Heap[hole/2];
    hole /= 2;
  }
  return hole;
}

runtime:

(Code in book)
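The same percolate-up insert as a runnable Python sketch (a 1-based list standing in for the slides' Heap array; `MinHeap` is an illustrative name):

```python
class MinHeap:
    def __init__(self):
        self.a = [None]                    # index 0 unused; root at index 1

    def insert(self, val):
        """Percolate up: O(log n) worst case."""
        self.a.append(val)                 # put val at the "next" leaf
        hole = len(self.a) - 1
        while hole > 1 and val < self.a[hole // 2]:
            self.a[hole] = self.a[hole // 2]   # slide the parent down
            hole //= 2
        self.a[hole] = val

    def find_min(self):
        return self.a[1]                   # the min is always at the root
```

Inserting 16, 32, 4, 69, 105, 43, 2 in that order leaves 2 at the root.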


Insert: 16, 32, 4, 69, 105, 43, 2 (exercise; array indices 0–8)

Heap – DeleteMin

Basic Idea:
1. Remove root (that is always the min!)
2. Put "last" leaf node at root
3. Find smallest child of node
4. Swap node with its smallest child if needed
5. Repeat steps 3 & 4 until no swaps needed

DeleteMin: percolate down

(Figure: deleting the min from the heap rooted at 10; the last leaf 65 is moved to the root and percolates down, giving a heap rooted at 15.)

Data Structures: Binary Heaps

DeleteMin Code (Optimized)

Object deleteMin() {
  assert(!isEmpty());
  returnVal = Heap[1];
  size--;
  newPos = percolateDown(1, Heap[size+1]);
  Heap[newPos] = Heap[size+1];
  return returnVal;
}

int percolateDown(int hole, Object val) {
  while (2*hole <= size) {
    left = 2*hole;
    right = left + 1;
    if (right <= size && Heap[right] < Heap[left])
      target = right;
    else
      target = left;
    if (Heap[target] < val) {
      Heap[hole] = Heap[target];
      hole = target;
    } else
      break;
  }
  return hole;
}

runtime:

(code in book)

Building a Heap: 12 5 11 3 10 6 9 4 8 1 7 2


Building a Heap

• Adding the items one at a time is O(n log n) in the worst case
• I promised O(n) for today

Buildheap pseudocode

private void buildHeap() {
  for (int i = currentSize/2; i > 0; i--)
    percolateDown(i);
}

runtime:
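The pseudocode above can be fleshed out; this is a sketch of percolateDown plus Floyd's bottom-up loop in Python (1-based array with index 0 unused, not the book's implementation):

```python
def percolate_down(a, hole, size):
    """Sift a[hole] down within a[1..size]."""
    val = a[hole]
    while 2 * hole <= size:
        child = 2 * hole
        if child < size and a[child + 1] < a[child]:
            child += 1                     # pick the smaller child
        if a[child] < val:
            a[hole] = a[child]             # move the child up
            hole = child
        else:
            break
    a[hole] = val

def build_heap(a):
    """Floyd's method: percolate down each non-leaf, bottom-up -- O(n) total."""
    size = len(a) - 1                      # a[0] is unused
    for i in range(size // 2, 0, -1):
        percolate_down(a, i, size)
```

On the running example 12 5 11 3 10 6 9 4 8 1 7 2, this ends with 1 at the root.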


Working on Heaps

• What are the two properties of a heap?
– Structure Property
– Order Property
• How do we work on heaps?
– Fix the structure
– Fix the order

BuildHeap: Floyd's Method

(Example input: 12 5 11 3 10 6 9 4 8 1 7 2)

BuildHeap: Floyd's Method

• Add elements arbitrarily to form a complete tree
• Pretend it's a heap and fix the heap-order property!

(Figures: the complete tree for 12 5 11 3 10 6 9 4 8 1 7 2, and the first percolateDown steps working bottom-up.)


BuildHeap: Floyd's Method (continued)

(Figures: percolateDown continues up the tree.)

Finally…

(Figure: the finished heap; level order 1, 3, 2, 4, 5, 6, 9, 12, 8, 10, 7, 11.)

runtime:

More Priority Queue Operations

• decreaseKey
– given a pointer to an object in the queue, reduce its priority value
– Solution: change priority and ______
• increaseKey
– given a pointer to an object in the queue, increase its priority value
– Solution: change priority and ______
• Why do we need a pointer? Why not simply the data value?

More Priority Queue Operations

• Remove(objPtr)
– given a pointer to an object in the queue, remove the object from the queue
– Solution: set priority to negative infinity, percolate up to root and deleteMin
• FindMax


Facts about Heaps

Observations:
• Finding a child/parent index is a multiply/divide by two
• Operations jump widely through the heap
• Each percolate step looks at only two new nodes
• Inserts are at least as common as deleteMins

Realities:
• Division/multiplication by powers of two are equally fast
• Looking at only two new pieces of data: bad for cache!
• With huge data sets, disk accesses dominate

Tries

www.mathcs.emory.edu/~cheung/Fcourses/323/Syllabus/book/PowerPoint/tries.ppt

Cycles to access:
CPU
Cache
Memory
Disk

Preprocessing Strings

• Preprocessing the pattern speeds up pattern matching queries
– After preprocessing the pattern, KMP's algorithm performs pattern matching in time proportional to the text size
• If the text is large, immutable and searched for often (e.g., works by Shakespeare), we may want to preprocess the text instead of the pattern
• A trie is a compact data structure for representing a set of strings, such as all the words in a text
– A trie supports pattern matching queries in time proportional to the pattern size

A Solution: d-Heaps

• Each node has d children
• Still representable by array
• Good choices for d:
– choose a power of two for efficiency
– fit one set of children in a cache line
– fit one set of children on a memory page/disk block
(Figure: a 3-heap with root 1 and its array representation.)

Standard Tries

• The standard trie for a set of strings S is an ordered tree such that:
– Each node but the root is labeled with a character
– The children of a node are alphabetically ordered
– The paths from the external nodes to the root yield the strings of S
• Example: standard trie for the set of strings
S = { bear, bell, bid, bull, buy, sell, stock, stop }
(Figure: the trie; e.g. the path b–e–a–r spells "bear".)
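A standard trie can be sketched in Python with a dictionary of children per node (alphabetical ordering comes for free if you sort keys when traversing; the names here are illustrative):

```python
class TrieNode:
    def __init__(self):
        self.children = {}       # character -> TrieNode
        self.is_word = False     # marks an external (word-ending) node

def trie_insert(root, word):
    """Insert word in O(m) for a word of length m."""
    for ch in word:
        root = root.children.setdefault(ch, TrieNode())
    root.is_word = True

def trie_contains(root, word):
    """Pattern matching query in time proportional to the pattern size."""
    for ch in word:
        if ch not in root.children:
            return False
        root = root.children[ch]
    return root.is_word
```

After inserting S = { bear, bell, bid, bull, buy, sell, stock, stop }, the query "bell" succeeds while the mere prefix "be" does not.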


Analysis of Standard Tries

• A standard trie uses O(n) space and supports searches, insertions and deletions in time O(dm), where:
n = total size of the strings in S
m = size of the string parameter of the operation
d = size of the alphabet

Compact Representation

• Compact representation of a compressed trie for an array of strings:
– Stores at the nodes ranges of indices instead of substrings
– Uses O(s) space, where s is the number of strings in the array
– Serves as an auxiliary index structure
• Example array:
S[0] = see, S[1] = bear, S[2] = sell, S[3] = stock, S[4] = bull, S[5] = buy, S[6] = bid, S[7] = hear, S[8] = bell, S[9] = stop
(Figure: nodes labeled with triples (string index, start, end), e.g. 1,0,0 and 7,0,3.)

Word Matching with a Trie

• Insert the words of the text into a trie
• Each leaf is associated with one particular word
• Each leaf stores the indices where the associated word begins ("see" starts at indices 0 & 24, so the leaf for "see" stores those indices)
(Figure: the text "see a bear? sell stock! see a bull? buy stock! bid stock! bid stock! hear the bell? stop!" with character positions, and the corresponding trie with index lists at the leaves.)

Suffix Trie

• The suffix trie of a string X is the compressed trie of all the suffixes of X
• Example: X = minimize (positions 0–7)
(Figure: the suffix trie of "minimize", with edge labels e, i, mi, nimize, ze, mize, …)

Compressed Tries

• A compressed trie has internal nodes of degree at least two
• It is obtained from a standard trie by compressing chains of "redundant" nodes
• e.g., the "i" and "d" in "bid" are "redundant" because they signify the same word

Analysis of Suffix Tries

• Compact representation of the suffix trie for a string X of size n from an alphabet of size d
– Uses O(n) space
– Supports arbitrary pattern matching queries in X in O(dm) time, where m is the size of the pattern
– Can be constructed in O(n) time


Encoding Trie (1)

• A code is a mapping of each character of an alphabet to a binary code-word
• A prefix code is a binary code such that no code-word is the prefix of another code-word
• An encoding trie represents a prefix code
– Each leaf stores a character
– The code word of a character is given by the path from the root to the leaf storing the character (0 for a left child and 1 for a right child)
• Example: code words 00, 010, 011, 10, 11 for the characters a, b, c, d, e

Huffman's Algorithm

• Given a string X, Huffman's algorithm constructs a prefix code that minimizes the size of the encoding of X
• It runs in time O(n + d log d), where n is the size of X and d is the number of distinct characters of X
• A heap-based priority queue is used as an auxiliary structure

Algorithm HuffmanEncoding(X)
  Input: string X of size n
  Output: optimal encoding trie for X
  C ← distinctCharacters(X)
  computeFrequencies(C, X)
  Q ← new empty heap
  for all c ∈ C
    T ← new single-node tree storing c
    Q.insert(getFrequency(c), T)
  while Q.size() > 1
    f1 ← Q.min()
    T1 ← Q.removeMin()
    f2 ← Q.min()
    T2 ← Q.removeMin()
    T ← join(T1, T2)
    Q.insert(f1 + f2, T)
  return Q.removeMin()

Encoding Trie (2)

• Given a text string X, we want to find a prefix code for the characters of X that yields a small encoding for X
– Frequent characters should have short code-words
– Rare characters should have long code-words
• Example
– X = abracadabra
– T1 encodes X into 29 bits
– T2 encodes X into 24 bits

Example

• X = abracadabra
• Frequencies: a = 5, b = 2, c = 1, d = 1, r = 2
(Figure: repeated merges of the two lowest-frequency trees, ending in an encoding trie of total weight 11.)
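Huffman's algorithm can be sketched in Python with the standard-library `heapq` as the heap-based priority queue (the tuple-based tree encoding and tie-break counter are illustrative choices, not the pseudocode's `join`):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build an optimal prefix code for text -- O(n + d log d)."""
    freq = Counter(text)
    # heap entries: (frequency, tie-break, tree); a tree is either a
    # character (leaf) or a (left, right) pair (internal node)
    heap = [(f, i, c) for i, (c, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:                   # join the two minimum trees
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, tick, (t1, t2)))
        tick += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):        # internal node: 0 left, 1 right
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                              # leaf: record the code word
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes
```

For X = abracadabra (frequencies a:5, b:2, c:1, d:1, r:2), any Huffman code encodes X into 23 bits, improving on the 24-bit trie above.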

The End

One More Operation

• Merge two heaps

• Add the items from one into another?
– O(n log n)

• Start over and build it from scratch?
– O(n)


CSE 326: Data Structures
Priority Queues: Leftist Heaps & Skew Heaps

Definition: Null Path Length

null path length (npl) of a node x = the number of nodes between x and a null in its subtree
OR npl(x) = min distance to a descendant with 0 or 1 children
• npl(null) = –1
• npl(leaf) = 0
• npl(single-child node) = 0

Equivalent definitions:
1. npl(x) is the height of the largest complete subtree rooted at x
2. npl(x) = 1 + min{ npl(left(x)), npl(right(x)) }

Leftist Heap Properties

• Heap-order property
– parent's priority value is ≤ children's priority values
– result: minimum element is at the root
• Leftist property
– For every node x, npl(left(x)) ≥ npl(right(x))
– result: the tree is at least as "heavy" on the left as on the right

New Heap Operation: Merge

Given two heaps, merge them into one heap
– first attempt: insert each element of the smaller heap into the larger
  runtime:
– second attempt: concatenate the binary heaps' arrays and run buildHeap
  runtime:

Leftist Heaps

Idea:
Focus all heap maintenance work in one small part of the heap

Leftist heaps:
1. Most nodes are on the left
2. All the merging work is done on the right

Are These Leftist?

(Figure: three trees annotated with npl values. Every subtree of a leftist tree is leftist!)


Merge two heaps (basic idea)

• Put the smaller root as the new root
• Hang its left subtree on the left
• Recursively merge its right subtree and the other tree

Right Path in a Leftist Tree is Short (#1)

Claim: The right path is as short as any in the tree.
Proof: (By contradiction) Pick a shorter path: D1 < D2. Say it diverges from the right path at x.
npl(L) ≤ D1 – 1, because of the path of length D1 – 1 to null.
npl(R) ≥ D2 – 1, because every node on the right path is leftist.
Leftist property at x violated!

Right Path in a Leftist Tree is Short (#2)

Claim: If the right path has r nodes, then the tree has at least 2^r – 1 nodes.
Proof: (By induction)
Base case: r = 1. Tree has at least 2^1 – 1 = 1 node.
Inductive step: assume true for r' < r. Prove for a tree with right path of at least r nodes.
1. Right subtree: right path of r – 1 nodes ⇒ at least 2^(r–1) – 1 right subtree nodes (by induction)
2. Left subtree: also a right path of length at least r – 1 (by the previous slide) ⇒ at least 2^(r–1) – 1 left subtree nodes (by induction)
Total tree size: (2^(r–1) – 1) + (2^(r–1) – 1) + 1 = 2^r – 1

Merging Two Leftist Heaps

• merge(T1, T2) returns one leftist heap containing all elements of the two (distinct) leftist heaps T1 and T2
(Figure: if a < b, keep a as the root and recursively merge a's right subtree R1 with T2.)

Why do we have the leftist property?

Because it guarantees that:
• the right path is really short compared to the number of nodes in the tree
• a leftist tree of N nodes has a right path of at most lg(N + 1) nodes

Idea – perform all work on the right path

Merge Continued

• R' = Merge(R1, T2)
• If npl(R') > npl(L1), swap the two children

runtime:
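The merge just described can be sketched in Python, with the npl stored on each node (`LNode` is an illustrative name, not the course's class):

```python
class LNode:
    def __init__(self, key):
        self.key, self.left, self.right, self.npl = key, None, None, 0

def npl(n):
    return n.npl if n is not None else -1      # npl(null) = -1

def merge(a, b):
    """Merge two leftist heaps along their right paths -- O(log n)."""
    if a is None: return b
    if b is None: return a
    if b.key < a.key:
        a, b = b, a                            # a keeps the smaller root
    a.right = merge(a.right, b)                # all work on the right path
    if npl(a.left) < npl(a.right):             # restore the leftist property
        a.left, a.right = a.right, a.left
    a.npl = npl(a.right) + 1
    return a

def insert(h, key):
    return merge(h, LNode(key))                # one-node heap, then merge

def delete_min(h):
    return h.key, merge(h.left, h.right)       # root out, merge subtrees
```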


Let's do an example, but first…

Other Heap Operations

• insert ?
• deleteMin ?

Sewing Up the Example

(Figure: after the recursive merges, npl values are recomputed on the way back up, swapping children where the leftist property is violated. Done?)

Operations on Leftist Heaps

• merge with two trees of total size n: O(log n)
• insert with heap size n: O(log n)
– pretend node is a size-1 leftist heap
– insert by merging original heap with the one-node heap
• deleteMin with heap size n: O(log n)
– remove and return root
– merge left and right subtrees

Finally…

(Figure: the finished heap after the example merge.)

Leftist Merge Example

(Figure: step-by-step merge of two leftist heaps, including one special case.)

Leftist Heaps: Summary

Good:
•
•
Bad:
•
•


Example (merge, continued)

(Figure: the recursive merge steps for the running example.)

Random Definition: Amortized Time

am·or·tized time: running time limit resulting from "writing off" expensive runs of an algorithm over multiple cheap runs of the algorithm, usually resulting in a lower overall running time than indicated by the worst possible case.
If M operations take total O(M log N) time, amortized time per operation is O(log N)
Difference from average time:

Skew Heaps

Problems with leftist heaps
– extra storage for npl
– extra complexity/logic to maintain and check npl
– right side is "often" heavy and requires a switch
Solution: skew heaps
– "blindly" adjusting version of leftist heaps
– merge always switches children when fixing the right path
– amortized time for merge, insert, deleteMin = O(log n)
– however, worst case time for all three = O(n)

Skew Heap Code

void merge(heap1, heap2) {
  case {
    heap1 == NULL: return heap2;
    heap2 == NULL: return heap1;
    heap1.findMin() < heap2.findMin():
      temp = heap1.right;
      heap1.right = heap1.left;
      heap1.left = merge(heap2, temp);
      return heap1;
    otherwise:
      return merge(heap2, heap1);
  }
}
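The same blind merge as a runnable Python sketch, with a heap represented as a (key, left, right) tuple or None (an illustrative encoding; children are switched unconditionally):

```python
def skew_merge(a, b):
    """Skew-heap merge: O(log n) amortized, O(n) worst case."""
    if a is None: return b
    if b is None: return a
    if b[0] < a[0]:
        a, b = b, a                    # a holds the smaller root
    key, left, right = a
    # always switch children: the merged right path becomes the new left
    return (key, skew_merge(b, right), left)
```

deleteMin then falls out as "take the root, merge the two children", just as with leftist heaps.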

Merging Two Skew Heaps

(Figure: as in the leftist merge, but only one step per iteration, with children always switched.)

Runtime Analysis: Worst-case and Amortized

• No worst case guarantee on right path length!
• All operations rely on merge
⇒ worst case complexity of all ops = O(n)
• Probably won't get to amortized analysis in this course, but see Chapter 11 if curious
• Result: M merges take time M log n
⇒ amortized complexity of all ops = O(log n)


Comparing Heaps

• Binary Heaps
• d-Heaps
• Leftist Heaps
• Skew Heaps

Still room for improvement! (Where?)

The Binomial Tree, B_h

• B_h has height h and exactly 2^h nodes
• B_h is formed by making B_(h-1) a child of another B_(h-1)
• Root has exactly h children
• Number of nodes at depth d is the binomial coefficient (h choose d)
– Hence the name; we will not use this last property
(Figure: B_0, B_1, B_2, B_3)

Data Structures: Binomial Queues

Binomial Queue with n elements

A binomial queue with n elements has a unique structural representation in terms of binomial trees!

Write n in binary: n = 1101 (base 2) = 13 (base 10)

1 B_3 + 1 B_2 + no B_1 + 1 B_0

Yet Another Data Structure: Binomial Queues

• Structural property
– Forest of binomial trees with at most one tree of any height
– What's a forest? What's a binomial tree?
• Order property
– Each binomial tree has the heap-order property

Properties of Binomial Queue

• At most one binomial tree of any height
• n nodes ⇒ binary representation is of size ?
⇒ deepest tree has height ?
⇒ number of trees is ?
• Define: height(forest F) = max over trees T in F of height(T)
• A binomial queue with n nodes has height Θ(log n)


Operations on Binomial Queue

• Will again define merge as the base operation
– insert, deleteMin, buildBinomialQ will use merge
• Can we do increaseKey efficiently? decreaseKey?
• What about findMin?

Example: Binomial Queue Merge

(Figure: H1 is a forest with roots 1 and 3; H2 is a forest with roots -1 and 5.)

Merging Two Binomial Queues

Essentially like adding two binary numbers!
1. Combine the two forests
2. For k from 0 to maxheight {
a. m ← total number of B_k's in the two BQs
b. if m = 0: continue;
c. if m = 1: continue;
d. if m = 2: combine the two B_k's to form a B_(k+1)
e. if m = 3: retain one B_k and combine the other two to form a B_(k+1)
}
Claim: when this process ends, the forest has at most one tree of any height

(Analogy with binary addition: 0+0 = 0, 1+0 = 1, 1+1 = 0 plus a carry, 1+1+carry = 1 plus a carry.)

(Figures: successive steps of merging the example queues H1 and H2.)
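The binary-addition merge can be sketched in Python; here a queue is a list indexed by rank (None for an absent B_k) and a tree is a [key, children] pair — illustrative choices, not the lecture's representation:

```python
def link(t1, t2):
    """Combine two B_k's of equal rank into one B_(k+1): the larger-rooted
    tree becomes a child of the smaller-rooted one."""
    if t2[0] < t1[0]:
        t1, t2 = t2, t1
    t1[1].append(t2)
    return t1

def bq_merge(q1, q2):
    """Merge two binomial queues like adding two binary numbers."""
    out, carry = [], None
    for k in range(max(len(q1), len(q2)) + 1):
        trees = [t for t in (q1[k] if k < len(q1) else None,
                             q2[k] if k < len(q2) else None,
                             carry) if t is not None]
        if len(trees) <= 1:                    # m = 0 or 1: no carry
            out.append(trees[0] if trees else None)
            carry = None
        else:                                  # m = 2 or 3: link two B_k's
            out.append(trees[0] if len(trees) == 3 else None)
            carry = link(trees[-2], trees[-1])   # carry a B_(k+1)
    while out and out[-1] is None:
        out.pop()
    return out

def bq_insert(q, key):
    return bq_merge(q, [[key, []]])            # merge with a one-node B_0

def bq_find_min(q):
    return min(t[0] for t in q if t is not None)   # the min is some root
```

Inserting 13 keys one by one leaves trees exactly at the ranks of the 1-bits of 13 = 1101 (base 2).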


Example: Binomial Queue Merge (continued)

(Figure: the merged queue.)

Insert in a Binomial Queue

Insert(x): similar to leftist or skew heap
• Worst case complexity: same as merge, O(log n)
• Average case complexity: O(1)
• Why?? Hint: think of adding 1 to 1101

Example: Binomial Queue Merge (continued)

(Figure: the final merged queue.)

deleteMin in Binomial Queue

Similar to leftist and skew heaps…

Complexity of Merge

• Constant time for each height
• Max number of heights is log n
⇒ worst case running time = Θ(log n)

deleteMin: Example

• Find and delete the smallest root
• Merge BQ (without the shaded part) and BQ'
(Figure: BQ, and BQ' formed from the children of the deleted root.)


deleteMin: Example

Result:
(Figure: the resulting binomial queue, with roots 7 and 4.)

runtime:
