Algorithms and Computational Complexity: an Overview
Winter 2011
Larry Ruzzo
Thanks to Paul Beame, James Lee, and Kevin Wayne for some slides.

goals

Design of algorithms (a taste):
- design methods
- common or important types of problems
- analysis of algorithms: efficiency

Complexity and intractability (a taste):
- solving problems in principle is not enough; algorithms must be efficient
- some problems have no efficient solution
- NP-complete problems: an important and useful class of problems whose solutions (seemingly) cannot be found efficiently

complexity example

Cryptography (e.g., RSA; SSL in browsers):
- Secret: p, q prime, say 512 bits each
- Public: n = p x q, 1024 bits
- In principle there is an algorithm that, given n, will find p and q: try all ~2^512 possible p's. But that is an astronomical number.
- In practice, no fast algorithm is known for this problem (on non-quantum computers). The security of RSA depends on this fact, and research in "quantum computing" is strongly driven by the possibility of changing it.

algorithms versus machines

Moore's Law and the exponential improvements in hardware...
Ex: solving sparse linear equations, over 25 years: 10 orders of magnitude improvement!

algorithms or hardware?

[Figure: seconds to solve sparse linear systems (log scale, 10^0 to 10^7), 1960-2000. Hardware alone, running Gaussian Elimination (G.E.) on successive machines (CDC 3600, CDC 6600, CDC 7600, Cray 1, Cray 2, Cray 3 (est.)): about 4 orders of magnitude improvement. Source: Sandia, via M. Schultz]

[Figure: the same plot with algorithmic progress added (sparse G.E., Gauss-Seidel, SOR = Successive OverRelaxation, CG = Conjugate Gradient): software alone accounts for about 6 orders of magnitude improvement. Source: Sandia, via M. Schultz]

[Figure: the N-Body problem over 30 years: roughly 10^7 from hardware and 10^10 from software. Source: T. Quinn]

algorithms: a definition

A procedure to accomplish a task or solve a well-specified problem.
- Well-specified: we know what all possible inputs look like, and what the output looks like for each of them.
- "Accomplish" means via simple, well-defined steps.
- Ex: sorting names (via comparison)
- Ex: checking for primality (via +, -, *, /, ≤)

algorithms: a sample problem

A printed circuit-board company has a robot arm that solders components to the board. The time taken is proportional to the total distance the arm must move from its initial rest position, around the board, and back to the initial position. For each board design, find the best order in which to do the soldering.

[Photos: printed circuit boards]

more precise problem definition

Input: a set S of n points in the plane.
Output: the shortest cycle tour that visits each point in the set S.
Better known as "TSP" (the Traveling Salesperson Problem).
How might you solve it?

nearest neighbor heuristic

Start at some point p0.
Walk first to its nearest neighbor p1.
Walk to the nearest unvisited neighbor p2, then the nearest unvisited p3, and so on, until all points have been visited.
Then walk back to p0.

(heuristic: a rule of thumb, simplification, or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. May be good, but usually not guaranteed to give the best or fastest solution.)

[Figure: a nearest-neighbor tour starting at p0 and proceeding p1, ..., p6]

an input where nn works badly

[Figure: points on a line with gaps 16, 4, 1, .9, 2, 8. Starting from p0, the nearest-neighbor tour has length ~84.]

[Figure: the optimal solution for the same input has length ~64.]
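A minimal sketch of the nearest-neighbor heuristic, assuming points are given as (x, y) tuples with Euclidean distances (the function name and representation are mine, not from the slides; math.dist requires Python 3.8+):

    import math

    def nearest_neighbor_tour(points):
        # Greedy TSP heuristic: from the current point, always walk to the
        # closest unvisited point; finally return to the start. O(n^2) time.
        start = points[0]                          # p0: arbitrary start
        tour = [start]
        unvisited = set(points[1:])
        while unvisited:
            here = tour[-1]
            nearest = min(unvisited, key=lambda p: math.dist(here, p))
            tour.append(nearest)
            unvisited.remove(nearest)
        tour.append(start)                         # close the cycle at p0
        return tour

On the line of points above, starting from p0 this greedy rule produces the zig-zagging tour of length ~84 rather than the optimal ~64.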
revised heuristic – closest pairs first

Repeatedly join the closest pair of points such that the result can still be part of a single loop in the end (i.e., join endpoints, but not interior points, of the path segments already created).
How does this work on our bad example?

[Figure: the same line of points with gaps 16, 4, 1, .9, 2, 8, starting from p0]

a bad example for closest pair

[Figure: points with gaps 1 and 1.5, arranged so that greedily joining closest pairs is forced into a bad tour]

[Figure: the closest-pair tour has length 6 + √10 ≈ 9.16, versus an optimal tour of length 8]

something that works

"Brute Force Search": for each of the n! = n(n-1)(n-2)...1 orderings of the points, check the length of the cycle it defines; keep the best one.

two notes

The two incorrect algorithms were greedy:
- Greedy ideas are often very natural and tempting.
- They make choices that look great "locally" (and never reconsider them).
- When greed works, the algorithms are typically efficient.
- BUT: greed often does not work; you get boxed in.

Our correct algorithm avoids this, but is incredibly slow:
- 20! is so large that checking one billion orderings per second would take about 2.4 billion seconds (around 77 years!).
- And growing: n! ~ √(2πn)·(n/e)^n = 2^Θ(n log n).

the morals of the story

- Algorithms are important: many performance gains outstrip Moore's law.
- Simple problems can be hard: factoring, TSP, many others.
- Simple ideas don't always work: nearest neighbor, closest-pair heuristics.
- Simple algorithms can be very slow: brute-force factoring, TSP.
- A point we hope to make: for some problems, even the best algorithms are slow.

my plan

- A brief overview of the theory of algorithms: efficiency and asymptotic analysis.
- Some scattered examples of simple problems where clever algorithms help.
- A brief overview of the theory of intractability, especially NP-complete problems.
- "Basics every educated CSE student should know."

computational complexity

The complexity of an algorithm associates a number T(n), the worst-case time the algorithm takes, with each problem size n.
Mathematically, T: N^+ → R^+, i.e., T is a function mapping positive integers (problem sizes) to positive real numbers (number of steps).

[Figure: T(n), time versus problem size]

computational complexity: general goals

Characterize the growth rate of the worst-case running time as a function of problem size, up to a constant factor. Why not try to be more precise?
- Average-case analysis, for example, is hard to define and analyze.
- Technological variations (computer, compiler, OS, ...) easily change the constant by 10x or more.
- Being more precise is a ton of work.
- A key question is "scale-up": if I can afford this today, how much longer will it take when my business is 2x larger? (E.g., today: cn^2; next year: c(2n)^2 = 4cn^2, i.e., 4x longer.)

[Figure: T(n) plotted between n log2 n and 2n log2 n, time versus problem size]

asymptotic analysis & big-O

Given two functions f and g: N → R, f(n) is O(g(n)) iff there is a constant c > 0 such that f(n) is eventually always ≤ c·g(n).
Example: 10n^2 - 16n + 100 is O(n^2) (and also O(n^3), ...).
Why? 10n^2 - 16n + 100 ≤ 11n^2 for all n ≥ 10.

polynomial vs exponential

For all r > 1 (no matter how small) and all d > 0 (no matter how large), n^d = O(r^n).
[Figure: 1.01^n eventually overtakes n^100]
In short, every exponential grows faster than every polynomial!

the complexity class P: polynomial time

P: running time O(n^d) for some constant d (d is independent of the input size n).
Nice scaling property: there is a constant c such that doubling n increases the time by only a factor of c (e.g., c ≈ 2^d).
Contrast with exponential: for any constant c, there is a d such that going from n to n + d increases the time by a factor of more than c (e.g., 2^n vs. 2^(n+1): each extra input element already doubles the time).

polynomial vs exponential growth

[Figure: growth of 2^(2n), 2^(n/10), and 1000n^2; each exponential eventually dwarfs the polynomial]
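A small numeric sketch of that comparison (the sample sizes are my choice, not from the slides):

    # Compare 1000n^2 with 2^(n/10) and 2^(2n) at a few sizes; even the
    # slow-looking exponential 2^(n/10) overtakes the polynomial below n = 300.
    for n in (10, 100, 300, 500):
        poly = 1000 * n**2
        exp_slow = 2 ** (n // 10)       # 2^(n/10)
        exp_fast = 2 ** (2 * n)         # 2^(2n)
        print(f"n={n:>4}: 1000n^2 = {poly:.2e}, "
              f"2^(n/10) = {exp_slow:.2e}, "
              f"2^(2n) has {len(str(exp_fast))} digits")

At n = 500 the polynomial is still in the hundreds of millions, while 2^(2n) already has over 300 digits.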
why it matters

Exponential running times not only get very big, they do so abruptly, which likely yields erratic performance on small instances.

another view of poly vs exp

Next year's computer will be 2x faster. If I can solve a problem of size n0 today, how large a problem can I solve in the same time next year?

    Complexity    Increase            E.g., T = 10^12
    O(n)          n0 → 2 n0           10^12 → 2 x 10^12
    O(n^2)        n0 → √2 n0          10^6  → 1.4 x 10^6
    O(n^3)        n0 → 2^(1/3) n0     10^4  → 1.25 x 10^4
    2^(n/10)      n0 → n0 + 10        400   → 410
    2^n           n0 → n0 + 1         40    → 41

complexity summary

The typical initial goal of algorithm analysis is to find an asymptotic upper bound on the worst-case running time as a function of problem size. This is rarely the last word, but it often helps separate good algorithms from blatantly poor ones, so we can concentrate on the good ones!

why "polynomial"?

The point is not that n^2000 is a nice time bound, or that the differences among n, 2n, and n^2 are negligible. Rather, simple theoretical tools may not easily capture such differences, whereas exponentials are qualitatively different from polynomials, and so are more amenable to theoretical analysis.
- "My problem is in P" is a starting point for a more detailed analysis.
- "My problem is not in P" may suggest that you need to shift to a more tractable variant, or otherwise readjust expectations.

algorithm design techniques

We will survey two.

Later: dynamic programming
- Orderly solution of many smaller sub-problems, typically non-disjoint.
- Can give exponential speedups compared to more brute-force approaches.

Today: divide & conquer
- Reduce the problem to one or more sub-problems of the same type, typically disjoint.
- Each sub-problem's size is a fraction of the original's.
- Often gives a significant, usually polynomial, speedup.
- Examples: Mergesort, Binary Search, Strassen's Algorithm, Quicksort (roughly).

divide & conquer – the key idea

Suppose we've already invented DumbSort, taking time n^2.
Try just one level of divide & conquer:
    DumbSort(first n/2 elements)
    DumbSort(last n/2 elements)
    Merge results
Time: 2(n/2)^2 + n = n^2/2 + n << n^2. Almost twice as fast! That is D&C in a nutshell (a runnable sketch follows below).

d&c approach, cont.
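Before going further, here is a minimal runnable sketch of the one-level experiment just described, using a selection sort as the quadratic "DumbSort" (the helper names are mine, not from the slides):

    def dumb_sort(a):
        # Selection sort: a stand-in for any quadratic-time sort.
        a = list(a)
        for i in range(len(a)):
            j = min(range(i, len(a)), key=a.__getitem__)
            a[i], a[j] = a[j], a[i]
        return a

    def one_level_sort(a):
        # One level of divide & conquer: dumb-sort each half, then merge
        # in linear time, for 2(n/2)^2 + n = n^2/2 + n steps overall.
        mid = len(a) // 2
        left, right = dumb_sort(a[:mid]), dumb_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

Recursing on the two halves instead of stopping after one level is exactly Mergesort, which brings the total time down from n^2 to O(n log n).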