
CPS 230  DESIGN AND ANALYSIS OF ALGORITHMS
Fall 2008

Instructor: Herbert Edelsbrunner
Teaching Assistant: Zhiqiang Gu

Table of Contents

 1  Introduction                      3

I   DESIGN TECHNIQUES                 4
 2  Divide-and-Conquer                5
 3  Prune-and-Search                  8
 4  Dynamic Programming              11
 5  Greedy Algorithms                14
    First Homework Assignment        17

II  SEARCHING                        18
 6  Binary Search Trees              19
 7  Red-Black Trees                  22
 8  Amortized Analysis               26
 9  Splay Trees                      29
    Second Homework Assignment       33

III PRIORITIZING                     34
10  Heaps and Heapsort               35
11  Fibonacci Heaps                  38
12  Solving Recurrence Relations     41
    Third Homework Assignment        44

IV  GRAPH ALGORITHMS                 45
13  Graph Search                     46
14  Shortest Paths                   50
15  Minimum Spanning Trees           53
16  Union-Find                       56
    Fourth Homework Assignment       60

V   TOPOLOGICAL ALGORITHMS           61
17  Geometric Graphs                 62
18  Surfaces                         65
19  Homology                         68
    Fifth Homework Assignment        72

VI  GEOMETRIC ALGORITHMS             73
20  Plane-Sweep                      74
21  Delaunay Triangulations          77
22  Alpha Shapes                     81
    Sixth Homework Assignment        84

VII NP-COMPLETENESS                  85
23  Easy and Hard Problems           86
24  NP-Complete Problems             89
25  Approximation Algorithms         92
    Seventh Homework Assignment      95


1  Introduction

Meetings.  We meet twice a week, on Tuesdays and Thursdays, from 1:15 to 2:30 pm, in room D106 LSRC.

Communication.  The course material will be delivered in the two weekly lectures. A written record of the lectures will be available on the web, usually a day after the lecture. The web also contains other information, such as homework assignments, solutions, useful links, etc. The main supporting text is

    TARJAN. Data Structures and Network Algorithms. SIAM, 1983.

The book focuses on fundamental data structures and graph algorithms; additional topics covered in the course can be found in the lecture notes or in other texts on algorithms, such as

    KLEINBERG AND TARDOS. Algorithm Design. Pearson Education, 2006.

Examinations.  There will be a final exam (covering the material of the entire semester) and a midterm (at the beginning of October). You may want to freshen up your math skills before going into this course. The weighting of exams and homework used to determine your grade is

    homework   35%,
    midterm    25%,
    final      40%.

Homework.  We have seven homeworks scheduled throughout the semester, one per main topic covered in the course. The solutions to each homework are due one and a half weeks after the assignment; more precisely, they are due at the beginning of the third lecture after the assignment. The seventh homework may help you prepare for the final exam, and its solutions will not be collected.

    Rule 1.  The solution to any one homework question must fit on a single page (together with the statement of the problem).

    Rule 2.  The discussion of questions and solutions before the due date is not discouraged, but you must formulate your own solution.

    Rule 3.  The deadline for turning in solutions is 10 minutes after the beginning of the lecture on the due date.

Overview.  The main topics to be covered in this course are

    I    Design Techniques;
    II   Searching;
    III  Prioritizing;
    IV   Graph Algorithms;
    V    Topological Algorithms;
    VI   Geometric Algorithms;
    VII  NP-Completeness.

The emphasis will be on algorithm design and on algorithm analysis. For the analysis, we frequently need basic mathematical tools. Think of analysis as the measurement of the quality of your design. Just as you use your sense of taste to check your cooking, you should get into the habit of using algorithm analysis to justify design decisions when you write an algorithm or a computer program. This is a necessary step toward the next level in mastering the art of programming. I encourage you to implement new algorithms and to compare the experimental performance of your programs with the theoretical predictions gained through analysis.


I  DESIGN TECHNIQUES

 2  Divide-and-Conquer
 3  Prune-and-Search
 4  Dynamic Programming
 5  Greedy Algorithms
    First Homework Assignment
2  Divide-and-Conquer

We use quicksort as an example of an algorithm that follows the divide-and-conquer paradigm. It has the reputation of being the fastest comparison-based sorting algorithm. Indeed it is very fast on average but can be slow for some inputs, unless precautions are taken.

The algorithm.  Quicksort follows the general paradigm of divide-and-conquer: it divides the unsorted array into two pieces, recurses on the two pieces, and finally combines the two sorted pieces to obtain the sorted array. An interesting feature of quicksort is that the divide step separates small items from large items. As a consequence, combining the sorted pieces happens automatically, without any extra work.

    void QUICKSORT(int ℓ, r)
      if ℓ < r then m = SPLIT(ℓ, r);
                    QUICKSORT(ℓ, m − 1);
                    QUICKSORT(m + 1, r)
      endif.

We assume the items are stored in A[0..n − 1]. The array is sorted by calling QUICKSORT(0, n − 1).

Splitting.  The performance of quicksort depends heavily on the performance of the split operation. The effect of splitting from ℓ to r is:

    • x = A[ℓ] is moved to its correct location at A[m];
    • no item in A[ℓ..m − 1] is larger than x;
    • no item in A[m + 1..r] is smaller than x.

Figure 1 illustrates the process with an example. The nine items are split by moving a pointer i from left to right and another pointer j from right to left. The process stops when i and j cross. Getting the splitting right is a bit delicate, in particular in special cases. Make sure the algorithm is correct when (i) x is the smallest item, (ii) x is the largest item, and (iii) all items are the same.

    [Figure 1: First, i and j stop at items 9 and 1, which are then swapped. Second, i and j cross and the pivot, 7, is swapped with item 2.]

    int SPLIT(int ℓ, r)
      x = A[ℓ]; i = ℓ; j = r + 1;
      repeat
        repeat i++ until x ≤ A[i];
        repeat j-- until x ≥ A[j];
        if i < j then SWAP(i, j) endif
      until i ≥ j;
      SWAP(ℓ, j); return j.

Special cases (i) and (iii) are fine, but case (ii) requires a stopper at A[r + 1]. This stopper must be an item at least as large as x. If r < n − 1, the stopper is automatically given. For r = n − 1, we create such a stopper by setting A[n] = +∞.
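The pseudocode above is already close to C. The following is a minimal, runnable sketch of the same procedure, under the assumption that the items are ints stored in a[0..n-1] and that a[n] holds the sentinel just discussed; the names swap, split, quicksort, and the small test driver are illustrative choices, not part of the notes.

    #include <limits.h>
    #include <stdio.h>

    /* Items are stored in a[0..n-1]; a[n] holds a sentinel at least as
       large as every item, playing the role of A[n] = +infinity. */

    static void swap(int a[], int i, int j)
    {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    /* split: moves the pivot x = a[l] to its final position j, with no
       larger item to its left and no smaller item to its right. */
    static int split(int a[], int l, int r)
    {
        int x = a[l], i = l, j = r + 1;
        for (;;) {
            do { i++; } while (a[i] < x);   /* repeat i++ until x <= A[i] */
            do { j--; } while (a[j] > x);   /* repeat j-- until x >= A[j] */
            if (i >= j) break;
            swap(a, i, j);
        }
        swap(a, l, j);
        return j;
    }

    /* quicksort: sorts a[l..r] by splitting and recursing on both pieces. */
    static void quicksort(int a[], int l, int r)
    {
        if (l < r) {
            int m = split(a, l, r);
            quicksort(a, l, m - 1);
            quicksort(a, m + 1, r);
        }
    }

    int main(void)
    {
        int a[] = { 7, 3, 5, 4, 2, 9, 4, 2, 1, INT_MAX };  /* last slot is the sentinel */
        quicksort(a, 0, 8);
        for (int i = 0; i < 9; i++)
            printf("%d ", a[i]);
        printf("\n");
        return 0;
    }

On the nine items of Figure 1 the driver prints 1 2 2 3 4 4 5 7 9; the first call to split moves the pivot 7 to position 7, exactly as in the figure.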
Therefore the average sum k ∈ − U(k) = (2 1)+2 U(k 1) of array lengths split by QUICKSORT is − · − = (2k 1)+ 2(2k−1 1)+ . +2k−1(2 1) − − − n−1 k−1 1 k i T (n) = n + (T (m)+ T (n m 1)) . = k 2 2 n · − − · − m=0 i=0 X X = 2k k (2k 1) · − − To simplify, we multiply with n and obtain a second rela- = (n + 1) log2(n + 1) n. tion by substituting n 1 for n: · − − The running time in the best case is therefore in n−1 O(n log n). n T (n) = n2 +2 T (i), (1) · · i=0 X n−2 Randomization. One of the drawbacks of quicksort, as (n 1) T (n 1) = (n 1)2 +2 T (i). (2) − · − − · described until now, is that it is slow on rather common i=0 almost sorted sequences. The reason are pivots that tend X to create unbalanced splittings. Such pivots tend to oc- Next we subtract (2) from (1), we divide by n(n + 1), we cur in practice more often than one might expect. Hu- use repeated substitution to express T (n) as a sum, and man and often also machine generated data is frequently finally split the sum in two: biased towards certain distributions (in this case, permuta- tions), and it has been said that 80% of the time or more, T (n) T (n 1) 2n 1 = − + − sorting is done on either already sorted or almost sorted n +1 n n(n + 1) files. Such situations can often be helped by transferring T (n 2) 2n 3 2n 1 the algorithm’s dependence on the input data to internally = − + − + − n 1 (n 1)n n(n + 1) made random choices. In this particular case, we use ran- n − − domization to make the choice of the pivot independent of 2i 1 = − the input data. Assume RANDOM(ℓ, r) returns an integer i(i + 1) i=1 p [ℓ, r] with uniform probability: X n n ∈ 1 1 = 2 .