CECS 328 Lectures
Darin Goldstein

1  Review of Asymptotics

1. The Force: Certain functions are always eventually larger than others. This list goes from smallest to largest.

   (a) constants, sin, cos, tan^(-1)
   (b) (log n)^constant
   (c) n^constant
   (d) constant^n
   (e) n!
   (f) n^n

   Be very careful with the final two levels. YOU MAY ONLY USE THE FORCE ADDITIVELY, NOT MULTIPLICATIVELY! Examples below.

2. Growth of functions: All functions we consider in this class will be eventually positive.

   (a) O: f = O(g) ⇒ there exist constants c > 0 and N_c such that for every x ≥ N_c, f(x) ≤ c·g(x). Example: 5x^2 + 20 = O(x^3)
   (b) Ω: f = Ω(g) ⇒ there exist constants c > 0 and N_c such that for every x ≥ N_c, f(x) ≥ c·g(x). Example: x^3/3 − 9x = Ω(x^2)
   (c) Θ: f = Θ(g) ⇒ f = O(g) and f = Ω(g). Example: 3x^2 − 8x + 2 = Θ(x^2)
   (d) o: f = o(g) ⇒ lim_{x→∞} f(x)/g(x) = 0. Example: x^2 = o(x^3)
   (e) ω: f = ω(g) ⇒ lim_{x→∞} g(x)/f(x) = 0. Example: √x = ω(log x)

The following are exercises based on what you've learned so far:

1. Find the smallest n so that f = O(x^n) if such an n exists. Find the largest n so that f = Ω(x^n) if such an n exists. Find a function g so that f = Θ(g).

   (a) f(x) = (x^3 + x^2 log x)(log x + 1) + (17 log x + 19)(x^3 + 2)
   (b) f(x) = (x^6 − 3x + 12√x) / (x^2 log x + πx)
   (c) f(x) = (5x^2 log^3 x + √x) / (x(log x)^2 + x^3 log x)
   (d) f(x) = (2^x + x^2)(x^3 + 3^x)
   (e) f(x) = x^(x^2) + x^x

2. Find a function g(x, y) such that f = Θ(g). (Notice that both x and y are variables.) f(x, y) = (x^2 + xy + x log y)^3

3. Show that for any two positive constants a and b, log_a x = O(log_b x).

4. Assume that all functions are strictly positive and increasing. True or false:

   (a) x^2 = o(x^3); x log x = ω(x^2); 2^x = ω(x^2); x^2 = o(x^2)
   (b) f = o(g) ⇒ 2^f = o(2^g)
   (c) f = ω(g) ⇒ log f = ω(log g)
   (d) f_1 = O(g_1) and f_2 = O(g_2) ⇒ f_1 + f_2 = O(g_1 + g_2)
   (e) f_1 = o(g_1) and f_2 = o(g_2) ⇒ |f_1 − f_2| = o(|g_1 − g_2|)
   (f) log n!
= Θ(n log n)

2  Master Method

The Master Method is as follows: Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence

   T(n) = a·T(n/b) + f(n)

where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Find asymptotic bounds for the following problems using the Master Method if possible:

1. T(n) = 4T(n/2) + n
2. T(n) = T(2n/3) + 1
3. T(n) = 3T(n/4) + n log_2 n
4. T(n) = 2T(n/2) + n log_2 n

To show that the Master Method works, unroll the recursion to get the following:

   T(n) = f(n) + a·f(n/b) + a^2·f(n/b^2) + ... + a^(k−1)·(a·T(n/b^k) + f(n/b^(k−1)))

The first question becomes: what is k? When does the recursion stop?

   n/b^k = 1 ⇒ k = log_b n ⇒

   T(n) = a^(log_b n)·T(n/b^(log_b n)) + Σ_{i=0}^{k−1} a^i·f(n/b^i)

   T(n/b^(log_b n)) = T(1) = Θ(1) ⇒ a^(log_b n)·T(n/b^(log_b n)) = Θ(n^(log_b a))

(the last step uses a^(log_b n) = n^(log_b a)). So now the only question is what happens with the term Σ_{i=0}^{k−1} a^i·f(n/b^i). There are 3 cases to consider.

1. Assume that f(n) = O(n^(log_b a − ε)) for some ε > 0. Then by the definition of O notation, there exists c > 0 such that eventually f(n) ≤ c·n^(log_b a − ε). This implies the following:

   Σ_{i=0}^{k−1} a^i·f(n/b^i) ≤ c·Σ_{i=0}^{k−1} a^i·(n/b^i)^(log_b a − ε) = c·Σ_{i=0}^{k−1} b^(εi)·a^i·n^(log_b a − ε)/a^i = c·n^(log_b a − ε)·Σ_{i=0}^{k−1} b^(εi) ≤ c·n^(log_b a − ε)·O(b^(εk)) = O(n^(log_b a))

   since b^(εk) = b^(ε·log_b n) = n^ε. Therefore T(n) = Θ(n^(log_b a)).

2. Assume that f(n) = Θ(n^(log_b a)). Then

   Σ_{i=0}^{k−1} a^i·f(n/b^i) ∼ Σ_{i=0}^{k−1} a^i·(n/b^i)^(log_b a) = k·n^(log_b a)

   k = log_b n ⇒ k·n^(log_b a) ∼ n^(log_b a)·log n

3. Assume that f(n) = Ω(n^(log_b a + ε)) and there exists c < 1 such that eventually a·f(n/b) ≤ c·f(n). By the definition of Ω, there exists c′ > 0 such that eventually f(n) ≥ c′·n^(log_b a + ε).
Iterating the condition a·f(n/b) ≤ c·f(n) gives a^i·f(n/b^i) ≤ c^i·f(n), so

   Σ_{i=0}^{k−1} a^i·f(n/b^i) ≤ Σ_{i=0}^{k−1} c^i·f(n) ≤ Σ_{i=0}^{∞} c^i·f(n) = f(n)/(1 − c) = O(f(n))

Since f(n) = Ω(n^(log_b a + ε)), the Θ(n^(log_b a)) term is dominated by f(n), so T(n) = O(f(n)); and T(n) ≥ f(n) directly from the recurrence, so T(n) = Ω(f(n)) ⇒ T(n) = Θ(f(n)).

3  Divide and Conquer: Majority Element, Closest Point Pair

3.1  Majority Element

An array A[1..n] is said to have a majority element if strictly more than half of its entries are the same. Given an array, the task is to design an efficient algorithm to tell whether the array has a majority element and, if so, to find that element. The elements of the array are not necessarily from some ordered domain like the integers, so there can be no comparisons of the form "Is A[i] > A[j]?". (Think of the array elements as pictures, say.) However, you can answer questions of the form "Is A[i] = A[j]?" in constant time.

The naive way to do this is to compare every element to every other element, for a running time of O(n^2). There are two ways to accomplish this faster, and both use divide and conquer.

1. Split the array A into two halves, A_1 and A_2. If there is a majority element in A, then it must be a majority element in at least one of the halves: if an element x is not a majority element in either half, then its counts in the two halves sum to at most n/4 + n/4 = n/2, which is not strictly more than half the total in A. Recursively determine the majority element in each of the halves (should one exist), and then determine by brute-force search whether either candidate x_1 or x_2 (should either or both exist) is a majority element for the full array; base cases for n = 1 and n = 2 are easy. If T(n) is the time it takes to find a majority element in an array of size n, then

   T(n) = 2T(n/2) + O(n) ⇒ T(n) = O(n log n)

2. Consider the following operation: If the array has an odd number of elements in it, choose any element and check whether it is a majority element. If so, you're done. If not, throw it away. Now assume the array has an even number of elements. Split the array into pairs of 2 elements.
For each pair, if the two elements are the same, keep one of them to add to a new array; otherwise, throw away both elements in the pair. The majority element must survive until the very end: every discarded pair contains at least one non-majority element, so if x is a majority element before a step, the pairs that keep x outnumber the pairs that keep anything else, and x remains a majority element after the step. Since each round at least halves the array and costs O(n) time on an array of size n, the total running time satisfies T(n) ≤ T(n/2) + O(n) ⇒ T(n) = O(n).

3.2  Closest Point Pair

Claim: Given a δ × δ square and the rule that every point must be at least distance δ from every other point, at most O(1) points can fit in the square.

You are given a set of n points in the plane and you want to determine the closest pair of points. There is a simple O(n^2) algorithm to do it, but it's not good enough. Use the following algorithm. Sort all the points on x- and y-coordinate (if they are not already sorted) before the algorithm begins. We can now assume that all inputs to the following function are sorted on both.

1. If n ≤ 3, just compute and return the answer via brute force.

2. Divide the points into two roughly equal-sized sets and recursively find the closest pair in each. Let the closest pair on the left side be {p_1, p_2} and on the right side {q_1, q_2}. Let δ = min{d(p_1, p_2), d(q_1, q_2)}. Let L be the vertical line separating the two sets of points.

3. If there is a pair of points with distance closer than δ, then one point must be on the left and one on the right, and both points must be within distance δ of L. Remove from consideration all points that are farther from L than δ. Note that all remaining points lie on one side of L or the other.

4. Starting at the point p with the lowest y-coordinate, consider which points may be within distance δ of p. These points must lie within a δ × δ square on the other side of L.
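The recurrences in the Section 2 exercises can be sanity-checked numerically. This is an illustrative sketch, not part of the proof: the function names T1 and T2 and the sample sizes are my own choices, and the recurrences are evaluated exactly with memoization.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T1(n):
    # Exercise 1: T(n) = 4T(n/2) + n.  Here log_2 4 = 2 and
    # f(n) = n = O(n^(2 - eps)), so case 1 predicts Theta(n^2).
    if n <= 1:
        return 1
    return 4 * T1(n // 2) + n

@lru_cache(maxsize=None)
def T2(n):
    # Exercise 2: T(n) = T(2n/3) + 1.  Here log_{3/2} 1 = 0 and
    # f(n) = Theta(n^0), so case 2 predicts Theta(log n).
    if n <= 1:
        return 1
    return T2(2 * n // 3) + 1

# If the predictions are right, these ratios should level off at constants.
for n in (2**10, 2**14, 2**18):
    print(n, T1(n) / n**2, T2(n) / math.log(n))
```

For T1 the ratio approaches 2 (at powers of 2 the exact solution is 2n^2 − n), while the T2 ratio settles near 1/ln(3/2), consistent with the predicted Θ bounds.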
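The pairing procedure of Section 3.1 can be sketched in Python. This is a sketch under the notes' assumptions: only equality comparisons are used, and the helper names (majority_element, is_majority) are my own.

```python
def majority_element(A):
    # Returns the majority element of A, or None if there isn't one.
    # Uses only equality tests, as the notes require.
    def is_majority(x, arr):
        return 2 * sum(1 for y in arr if y == x) > len(arr)

    original = list(A)
    A = list(A)
    while len(A) > 1:
        if len(A) % 2 == 1:
            # Odd length: test one element against the full array, then drop it.
            x = A[-1]
            if is_majority(x, original):
                return x
            A.pop()
        # Pair up the elements; keep one copy of each equal pair,
        # throw away both elements of each unequal pair.
        A = [A[i] for i in range(0, len(A), 2) if A[i] == A[i + 1]]
    if A and is_majority(A[0], original):
        return A[0]
    return None
```

Each round costs O(n) and at least halves the array, so the whole procedure runs in O(n) time; note that the surviving candidate is always verified against the original array before being returned.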
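The notes break off after step 4; the sketch below fills in the standard remaining step (scanning a constant number of strip successors, which the packing claim above justifies). It assumes distinct points given as (x, y) tuples, and the function names are my own.

```python
from math import hypot, inf

def closest_pair(points):
    # Returns the distance between the closest pair of points.
    # Points are sorted by x and by y exactly once, as in the notes.
    px = sorted(points)                       # sorted by x-coordinate
    py = sorted(points, key=lambda p: p[1])   # sorted by y-coordinate

    def solve(px, py):
        n = len(px)
        if n <= 3:
            # Step 1: brute force over all pairs.
            return min((hypot(px[i][0] - px[j][0], px[i][1] - px[j][1])
                        for i in range(n) for j in range(i + 1, n)),
                       default=inf)
        # Step 2: split at the median x-coordinate and recurse.
        mid = n // 2
        x_mid = px[mid][0]                    # the vertical line L
        left_px, right_px = px[:mid], px[mid:]
        left_set = set(left_px)               # assumes distinct points
        left_py = [p for p in py if p in left_set]
        right_py = [p for p in py if p not in left_set]
        delta = min(solve(left_px, left_py), solve(right_px, right_py))

        # Step 3: keep only points within delta of L, still in y order.
        strip = [p for p in py if abs(p[0] - x_mid) < delta]
        best = delta
        for i, p in enumerate(strip):
            # Step 4 plus the packing claim: only O(1) strip successors
            # of p can lie within delta, so checking 7 of them suffices.
            for q in strip[i + 1:i + 8]:
                if q[1] - p[1] >= best:
                    break
                best = min(best, hypot(p[0] - q[0], p[1] - q[1]))
        return best

    return solve(px, py)
```

The strip scan is linear per level, so the recurrence is T(n) = 2T(n/2) + O(n) = O(n log n) after the initial sorts.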