
CSE 373: Disjoint sets continued
Michael Lee
Friday, Mar 2, 2018

Warmup

Consider the following disjoint set. What happens if we run findSet(8), then union(4, 17)? Note: the union(...) method internally calls findSet(...).

[Figure: a forest of three trees whose roots are 0 (r=0), 1 (r=3), and 11 (r=3); node 8 sits several levels below root 1.]

What happens when we run findSet(8)?

Step 1: We find the node corresponding to 8 in O(1) time.
Step 2: We travel up the tree until we find the root.
Step 3: We move each node we passed by (every red node) to point directly at the root. Note: we do not update the rank (too expensive).

[Figure: after compression, nodes 7 and 8 point directly at root 1.]

What happens if we run union(4, 17)?

Step 1: We first run findSet(4). So we need to crawl up and find the parent...
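The find-and-compress steps above can be sketched in Python. This is a minimal sketch, assuming an array-based representation where `parent[i] == -1` marks a root; the class name and field names are illustrative, not the lecture's actual implementation.

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = [-1] * n  # -1 means "this node is a root"
        self.rank = [0] * n     # ranks are NOT updated during compression

    def findSet(self, x):
        # Step 2: travel up the tree until we find the root.
        root = x
        while self.parent[root] != -1:
            root = self.parent[root]
        # Step 3: point every node we passed by directly at the root.
        while x != root:
            nxt = self.parent[x]
            self.parent[x] = root
            x = nxt
        return root
```

After findSet(8), every node on the path from 8 up to its root points directly at the root, so later lookups along that path take O(1).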
...and make node "4" point directly at the root.

[Figure: node 4 now points directly at root 1.]

Step 2: We next run findSet(17) and repeat the process.

[Figure: node 17 now points directly at root 11.]

We've finished findSet(4) and findSet(17), so now we need to finish the rest of union(4, 17) by linking the two trees together. The ranks are the same, so we arbitrarily make set 1 the root and make set 11 the child. We then update the rank of set 1 (now r=4) and "forget" the rank of set 11.

Path compression: runtime

Now, what are the worst-case and best-case runtimes of the following?

- makeSet(x): O(1), still the same
- findSet(x): in the best case O(1), in the worst case O(log(n))
- union(x, y): in the best case O(1), in the worst case O(log(n))

Back to Kruskal's

Why are we doing this?
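The union-by-rank linking step described above can be sketched as follows. This is a sketch over module-level dicts, assuming the standard makeSet/findSet/union interface; the names mirror the slides but the dict layout is illustrative.

```python
parent = {}
rank = {}

def makeSet(x):
    parent[x] = x
    rank[x] = 0

def findSet(x):
    if parent[x] != x:
        parent[x] = findSet(parent[x])  # path compression
    return parent[x]

def union(x, y):
    # union internally calls findSet, as the warmup notes.
    rx, ry = findSet(x), findSet(y)
    if rx == ry:
        return
    if rank[rx] < rank[ry]:
        rx, ry = ry, rx        # make rx the higher-rank root
    parent[ry] = rx            # lower-rank root becomes the child
    if rank[rx] == rank[ry]:
        rank[rx] += 1          # ranks tied: the surviving root's rank grows
    # ry's rank is simply never consulted again ("forgotten")
```

On a tie, the root choice is arbitrary (here, the first argument's root wins), and only the surviving root's rank is updated, matching the slide's "forget the rank of set 11" step.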
To help us implement Kruskal's algorithm:

def kruskal():
    for (v : vertices):
        makeMST(v)
    sort edges in ascending order by their weight
    mst = new SomeSet<Edge>()
    for (edge : edges):
        if findMST(edge.src) != findMST(edge.dst):
            union(edge.src, edge.dst)
            mst.add(edge)
    return mst

- makeMST(v) takes O(t_m) time
- findMST(v) takes O(t_f) time
- union(u, v) takes O(t_u) time

We concluded that the runtime is:

O(|V|·t_m + |E|·log(|E|) + |E|·t_f + |V|·t_u)
   setup     sorting edges       core loop

Well, we just said that in the worst case:

- t_m ∈ O(1)
- t_f ∈ O(log(|V|))
- t_u ∈ O(log(|V|))

So the worst-case overall runtime of Kruskal's is:

O(|V| + |E|·log(|E|) + (|E| + |V|)·log(|V|))
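The pseudocode above can be made runnable as the following sketch, assuming edges are `(weight, src, dst)` tuples. The helper names follow the slides' makeMST/findMST, but those are just ordinary disjoint-set operations on the vertex set.

```python
def kruskal(vertices, edges):
    parent = {v: v for v in vertices}   # makeMST(v) for every vertex
    rank = {v: 0 for v in vertices}

    def findMST(v):
        root = v
        while parent[root] != root:
            root = parent[root]
        while v != root:                # path compression
            nxt = parent[v]
            parent[v] = root
            v = nxt
        return root

    mst = []
    for (w, u, v) in sorted(edges):     # sort edges ascending by weight
        ru, rv = findMST(u), findMST(v)
        if ru != rv:
            # union by rank: attach the lower-rank root under the higher
            if rank[ru] < rank[rv]:
                ru, rv = rv, ru
            parent[rv] = ru
            if rank[ru] == rank[rv]:
                rank[ru] += 1
            mst.append((w, u, v))
    return mst
```

The core loop does two finds per edge and at most one union, which is where the |E|·t_f and |V|·t_u terms in the analysis come from (only |V| − 1 unions ever succeed).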
Back to Kruskal's

Our worst-case runtime:

O(|V| + |E|·log(|E|) + (|E| + |V|)·log(|V|))

One minor improvement: since our edge weights are numbers, we can likely use a linear sort and improve the runtime to:

O(|V| + |E| + (|E| + |V|)·log(|V|))

We can drop the |V| + |E| (they're dominated by the last term):

O((|E| + |V|)·log(|V|))

...and we're left with something that's basically the same as Prim.
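The "linear sort" mentioned above could be, for example, a bucket sort when edge weights are small non-negative integers: it runs in O(|E| + W) time, where W is the maximum weight. A sketch under that assumption (the function name and tuple layout are illustrative):

```python
def bucket_sort_edges(edges, max_weight):
    # One bucket per possible weight; edges are (weight, src, dst) tuples.
    buckets = [[] for _ in range(max_weight + 1)]
    for edge in edges:
        buckets[edge[0]].append(edge)   # index by weight
    # Concatenate buckets in increasing weight order.
    return [e for bucket in buckets for e in bucket]
```

This only beats comparison sorting when W is modest; for arbitrary weights, the |E|·log(|E|) sort stands.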