
Algorithms and Computational Complexity: an Overview

Winter 2011 Larry Ruzzo

Thanks to Paul Beame, James Lee, Kevin Wayne for some slides

goals
Design of Algorithms – a taste
  design methods
  common or important types of problems
  efficiency

goals
Complexity & intractability – a taste
  solving problems in principle is not enough – algorithms must be efficient
  some problems have no efficient solution
  NP-complete problems: an important & useful class of problems whose solutions (seemingly) cannot be found efficiently

complexity example (e.g. RSA, SSL in browsers)
Secret: p, q prime, say 512 bits each
Public: n = p × q, 1024 bits
In principle there is an algorithm that, given n, will find p and q: try all 2^512 possible p’s – but that is an astronomical number
In practice

  no fast algorithm is known for this problem (on non-quantum computers)
  security of RSA depends on this fact
  (and research in “quantum computing” is strongly driven by the possibility of changing this)
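To make the “in principle” algorithm concrete, here is a minimal Python sketch of factoring by trial division (my illustration, not from the slides). It works instantly on toy numbers; for a 1024-bit n it would need on the order of 2^512 trials, which is exactly the point.

    import math

    def factor_semiprime(n):
        """Find (p, q) with p * q == n by brute-force trial division.
        Feasible only for small n; hopeless for 1024-bit RSA moduli."""
        for p in range(2, math.isqrt(n) + 1):
            if n % p == 0:
                return p, n // p
        return None  # n is prime (or 1): no nontrivial factors

    print(factor_semiprime(1009 * 2003))   # -> (1009, 2003)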

algorithms versus machines
Moore’s Law and the exponential improvements in hardware...

Ex: sparse linear equations over 25 years

10 orders of magnitude improvement!

algorithms or hardware?
[chart: seconds to solve sparse linear systems, 1960–2000, log scale 10^0–10^7; source: Sandia, via M. Schultz.
 Hardware alone (G.E./CDC 3600, CDC 6600, CDC 7600, Cray 1, Cray 2, Cray 3 est.): about 4 orders of magnitude.
 Software alone (G.E. = Gaussian Elimination, Gauss-Seidel, Sparse G.E., SOR = Successive OverRelaxation, CG = Conjugate Gradient): about 6 orders of magnitude.]

algorithms or hardware?
The N-Body Problem: in 30 years, ~10^7 from hardware, ~10^10 from software.
Source: T. Quinn

algorithms: a definition
Procedure to accomplish a task or solve a well-specified problem
  Well-specified: know what all possible inputs look like and what output looks like given them
  “accomplish” via simple, well-defined steps
Ex: sorting names (via comparison)
Ex: checking for primality (via +, -, *, /, ≤)

algorithms: a sample problem
Printed circuit-board company has a robot arm that solders components to the board
Time: proportional to total distance the arm must move from initial rest position around the board and back to the initial position
For each board design, find best order to do the soldering


printed circuit board
[photos of a printed circuit board]

more precise problem definition
Input: Given a set S of n points in the plane
Output: The shortest cycle tour that visits each point in the set S

Better known as “TSP”

How might you solve it?

nearest neighbor heuristic

Start at some point p0
Walk first to its nearest neighbor p1
Walk to the nearest unvisited neighbor p2, then nearest unvisited p3, … until all points have been visited
Then walk back to p0

heuristic: A rule of thumb, simplification, or educated guess that reduces or limits the search for solutions in domains that are difficult and poorly understood. May be good, but usually not guaranteed to give the best or fastest solution.
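In code, the heuristic is only a few lines. A Python sketch (mine; the point representation and distance helper are assumptions, not from the slides):

    import math

    def dist(a, b):
        """Euclidean distance between two (x, y) points."""
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def nearest_neighbor_tour(points):
        """Greedy tour: start at points[0], repeatedly walk to the nearest
        unvisited point, then return to the start."""
        tour = [points[0]]
        unvisited = set(range(1, len(points)))
        while unvisited:
            nxt = min(unvisited, key=lambda i: dist(tour[-1], points[i]))
            tour.append(points[nxt])
            unvisited.remove(nxt)
        tour.append(points[0])                     # walk back to p0
        return tour

    def tour_length(tour):
        return sum(dist(a, b) for a, b in zip(tour, tour[1:]))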

nearest neighbor heuristic
[figure: example tour from p0 through p1, …, p6 and back]

an input where nn works badly
[figure: points on a line; labeled distances 16, 4, 1, .9, 2, 8]
nearest-neighbor tour from p0: length ~ 84

an input where nn works badly
optimal soln for this example: length ~ 64

revised heuristic – closest pairs first
Repeatedly join the closest pair of points (such that the result can still be part of a single loop in the end; i.e., join endpoints, but not points in the middle, of path segments already created)
How does this work on our bad example?
[figure: same points; labeled distances 16, 4, 1, .9, 2, 8]
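A Python sketch of the rule as stated above (my reading of it, not code from the slides): sort all pairs by distance, greedily add an edge whenever both endpoints still have degree < 2 and lie in different path segments, then close the loop.

    import math
    from itertools import combinations

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def closest_pairs_tour_length(points):
        """Closest-pairs-first heuristic: greedily join endpoints of distinct
        path segments, shortest pair first, then close the cycle."""
        n = len(points)
        parent = list(range(n))                      # union-find over segments
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        degree = [0] * n
        pairs = sorted(combinations(range(n), 2),
                       key=lambda e: dist(points[e[0]], points[e[1]]))
        total, joined = 0.0, 0
        for i, j in pairs:
            if degree[i] < 2 and degree[j] < 2 and find(i) != find(j):
                parent[find(i)] = find(j)
                degree[i] += 1
                degree[j] += 1
                total += dist(points[i], points[j])
                joined += 1
                if joined == n - 1:                  # one path through all points
                    break
        a, b = [i for i in range(n) if degree[i] < 2]    # the two loose ends
        return total + dist(points[a], points[b])        # close the loop

On a two-rows-of-three layout consistent with the numbers on the next slides (spacing 1.5 across, 1 between rows), this returns 6+√10 ≈ 9.16, while the optimal tour has length 8.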

a bad example for closest pair
[figure: six points in two rows of three; distance 1 between rows, 1.5 between columns]

a bad example for closest pair
closest-pairs-first tour: 6+√10 ≈ 9.16
vs
optimal tour: 8

something that works

“Brute Force Search”: For each of the n! = n(n-1)(n-2)…1 orderings of the points, check the length of the cycle; Keep the best one
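A literal Python version of this brute-force search (my illustration): fixing the starting point leaves (n−1)! orderings to try, which is already impractical past a dozen or so points.

    import math
    from itertools import permutations

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def brute_force_tsp(points):
        """Check every ordering of the points; return (best_length, best_cycle)."""
        first, rest = points[0], tuple(points[1:])
        best_len, best_cycle = float("inf"), None
        for perm in permutations(rest):              # (n-1)! orderings
            cycle = (first,) + perm + (first,)
            length = sum(dist(a, b) for a, b in zip(cycle, cycle[1:]))
            if length < best_len:
                best_len, best_cycle = length, cycle
        return best_len, best_cycle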

two notes
The two incorrect algorithms were greedy
  Often very natural & tempting ideas
  They make choices that look great “locally” (and never reconsider them)
  When greed works, the algorithms are typically efficient
  BUT: often does not work – you get boxed in
Our correct alg avoids this, but is incredibly slow
  20! is so large that checking one billion per second would take 2.4 billion seconds (around 70 years!)
  And growing: n! ~ √(2πn)·(n/e)^n ~ 2^O(n log n)

the morals of the story
Algorithms are important
  Many performance gains outstrip Moore’s law
Simple problems can be hard
  Factoring, TSP, many others
Simple ideas don’t always work
  Nearest neighbor, closest pair heuristics
Simple algorithms can be very slow
  Brute-force factoring, TSP
A point we hope to make: for some problems, even the best algorithms are slow

my plan
A brief overview of the theory of algorithms
  Efficiency & some scattered examples of simple problems where clever algorithms help
A brief overview of the theory of intractability
  Especially NP-complete problems

“Basics every educated CSE student should know”

The complexity of an algorithm associates a number T(n), the worst-case time the algorithm takes, with each problem size n.

Mathematically, T: N+ → R+, i.e., T is a mapping from positive integers (problem sizes) to positive real numbers (number of steps).

computational complexity
[figure: worst-case time T(n) plotted against problem size]

computational complexity: general goals
Characterize growth rate of worst-case run time as a function of problem size, up to a constant factor
Why not try to be more precise?
  Average-case, e.g., is hard to define, analyze
  Technological variations (OS, …) easily 10x or more
  Being more precise is a ton of work
A key question is “scale-up”: if I can afford this today, how much longer will it take when my business is 2x larger? (E.g. today: cn^2; next year: c(2n)^2 = 4cn^2 – 4x longer.)

computational complexity
[figure: T(n) vs problem size, with curves n log2 n and 2n log2 n]

asymptotic analysis & big-O
Given two functions f and g: N→R, f(n) is O(g(n)) iff ∃ constant c > 0 so that f(n) is eventually always ≤ c·g(n)

Example: 10n^2 - 16n + 100 is O(n^2)   (and also O(n^3)…)

why?: 10n^2 - 16n + 100 ≤ 11·n^2 for all n ≥ 10
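A two-line sanity check of that constant and threshold (my snippet, testing a finite range only):

    # check 10n^2 - 16n + 100 <= 11n^2 for 10 <= n < 10^6
    # (algebraically: equivalent to n^2 + 16n - 100 >= 0, true for all n >= 10)
    assert all(10*n*n - 16*n + 100 <= 11*n*n for n in range(10, 10**6))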

polynomial vs exponential

For all r > 1 (no matter how small) and all d > 0 (no matter how large), n^d = O(r^n).

[figure: n^100 vs 1.01^n – the exponential eventually dominates]

In short, every exponential grows faster than every polynomial!

the class P: polynomial time
P: Running time O(n^d) for some constant d (d is independent of the input size n)
Nice scaling property: there is a constant c s.t. doubling n, time increases only by a factor of c. (E.g., c ≈ 2^d)
Contrast with exponential: For any constant c, there is a d such that n → n+d increases time by a factor of more than c. (E.g., 2^n vs 2^(n+1))


polynomial vs exponential
[figure: plots of 2^(2n), 2^(n/10), and 1000n^2, shown at two scales]

why it matters
Exponential running times not only get very big, but do so abruptly, which likely yields erratic performance on small instances

another view of poly vs exp
Next year's computer will be 2x faster. If I can solve a problem of size n0 today, how large a problem can I solve in the same time next year?

Complexity    Increase            E.g. T = 10^12
O(n)          n0 → 2n0            10^12 → 2 x 10^12
O(n^2)        n0 → √2 n0          10^6  → 1.4 x 10^6
O(n^3)        n0 → ∛2 n0          10^4  → 1.25 x 10^4
2^(n/10)      n0 → n0 + 10        400   → 410
2^n           n0 → n0 + 1         40    → 41
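The table can be reproduced mechanically. A small Python sketch (mine): for each running-time function, find the largest n that fits in a budget of 10^12 steps, then in 2×10^12 (the slide rounds 10^12 to 2^40, hence the 400 and 40 in the last two rows).

    def max_size(time_fn, budget):
        """Largest n with time_fn(n) <= budget, by doubling then binary search."""
        hi = 1
        while time_fn(hi) <= budget:
            hi *= 2
        lo = 1
        while lo < hi - 1:
            mid = (lo + hi) // 2
            if time_fn(mid) <= budget:
                lo = mid
            else:
                hi = mid
        return lo

    budget = 10**12
    for name, fn in [("n",        lambda n: n),
                     ("n^2",      lambda n: n**2),
                     ("n^3",      lambda n: n**3),
                     ("2^(n/10)", lambda n: 2.0**(n / 10)),
                     ("2^n",      lambda n: 2.0**n)]:
        print(f"{name:9s} today: {max_size(fn, budget):>14,}"
              f"   2x faster: {max_size(fn, 2*budget):>14,}")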

complexity summary
Typical initial goal for algorithm analysis is to find an asymptotic upper bound on worst-case running time as a function of problem size
This is rarely the last word, but often helps separate good algorithms from blatantly poor ones – concentrate on the good ones!

why “polynomial”?
Point is not that n^2000 is a nice time bound, or that the differences among n and 2n and n^2 are negligible.

Rather, simple theoretical tools may not easily capture such differences, whereas exponentials are qualitatively different from polynomials, so more amenable to theoretical analysis.
  “My problem is in P” is a starting point for a more detailed analysis
  “My problem is not in P” may suggest that you need to shift to a more tractable variant, or otherwise readjust expectations

37 algorithm design techniques

algorithm design techniques
We will survey two:
Later: Dynamic programming
  Orderly solution of many smaller sub-problems, typically non-disjoint
  Can give exponential speedups compared to more brute-force approaches
Today: Divide & Conquer
  Reduce problem to one or more sub-problems of the same type, typically disjoint
  Often gives significant, usually polynomial, speedup

algorithm design techniques
Divide & Conquer
  Reduce problem to one or more sub-problems of the same type
  Each sub-problem’s size a fraction of the original
  Sub-problems typically disjoint
  Often gives significant, usually polynomial, speedup
Examples: Mergesort, Binary Search, Strassen’s Algorithm, Quicksort (roughly)

divide & conquer – the key idea
Suppose we've already invented DumbSort, taking time n^2
Try Just One Level of divide & conquer:
  DumbSort(first n/2 elements)
  DumbSort(last n/2 elements)
  Merge results
Time: 2·(n/2)^2 + n = n^2/2 + n << n^2
Almost twice as fast!
(D&C in a nutshell)

d&c approach, cont.
Moral 1: “two halves are better than a whole”
Two problems of half size are better than one full-size problem, even given the O(n) overhead of recombining, since the base algorithm has super-linear complexity.

Moral 2: “If a little's good, then more's better” Two levels of D&C would be almost 4 times faster, 3 levels almost 8, etc., even though overhead is growing. In the limit: you’ve just rediscovered mergesort.

mergesort (review)
Mergesort: (recursively) sort 2 half-lists, then merge results.

T(n) = 2T(n/2) + cn for n ≥ 2;  T(1) = 0
  (O(n) work per level, log2 n levels)
Solution: O(n log n)
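For reference, a plain Python mergesort matching this recurrence (my code, not from the slides):

    def mergesort(a):
        """Sort a list: recursively sort the two halves, then merge in O(n)."""
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = mergesort(a[:mid]), mergesort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(mergesort([5, 2, 8, 1, 9, 3]))   # [1, 2, 3, 5, 8, 9]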

A Divide & Conquer Example: Closest Pair of Points

closest pair of points: non-geometric version
Given n points and arbitrary distances between them, find the closest pair. (E.g., think of distance as airfare – definitely not Euclidean distance!)

(… and all the rest of the n-choose-2 edges…)

Must look at all n choose 2 pairwise distances, else any one you didn’t check might be the shortest.

Also true for Euclidean distance in 1-2 dimensions?

closest pair of points: 1-dimensional version
Given n points on the real line, find the closest pair

Closest pair is adjacent in ordered list
Time: O(n log n) to sort, if needed, plus O(n) to scan adjacent pairs
Key point: do not need to calc distances between all pairs: exploit geometry + ordering
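The 1-dimensional version in a few lines of Python (my sketch):

    def closest_pair_1d(xs):
        """Sort, then scan adjacent pairs; O(n log n) + O(n)."""
        s = sorted(xs)
        return min(zip(s, s[1:]), key=lambda pair: pair[1] - pair[0])

    print(closest_pair_1d([7.1, 2.0, 9.5, 2.4, 11.0]))   # (2.0, 2.4)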

closest pair of points: 2d, Euclidean distance: 1st try
Divide. Sub-divide region into 4 quadrants.

closest pair of points: 1st try
Divide. Sub-divide region into 4 quadrants.
Obstacle. Impossible to ensure n/4 points in each piece.

closest pair of points
Algorithm.
  Divide: draw vertical line L with ≈ n/2 points on each side.

closest pair of points
Algorithm.
  Divide: draw vertical line L with ≈ n/2 points on each side.
  Conquer: find closest pair on each side, recursively.
[figure: closest pair on the left of L has distance 12, on the right 21]

closest pair of points
Algorithm.
  Divide: draw vertical line L with ≈ n/2 points on each side.
  Conquer: find closest pair on each side, recursively.
  Combine: find closest pair with one point in each side. (seems like Θ(n^2)?)
  Return best of 3 solutions.
[figure: best split pair has distance 8, beating 12 and 21]

closest pair of points
Find closest pair with one point in each side, assuming that distance < δ.
  δ = min(12, 21)

closest pair of points
Find closest pair with one point in each side, assuming that distance < δ.
Observation: suffices to consider points within δ of line L.
  δ = min(12, 21)

closest pair of points
Find closest pair with one point in each side, assuming that distance < δ.
Observation: suffices to consider points within δ of line L.
Almost the one-D problem again: sort points in the 2δ-strip by their y coordinate.
  δ = min(12, 21)
[figure: strip points numbered 1–7 by y coordinate]

closest pair of points
Find closest pair with one point in each side, assuming that distance < δ.
Observation: suffices to consider points within δ of line L.
Almost the one-D problem again: sort points in the 2δ-strip by their y coordinate.
Only check pts within 8 positions of each other in the sorted list!
  δ = min(12, 21)

closest pair of points

Def. Let si be the point in the 2δ-strip with the i-th smallest y-coordinate.
Claim. If |i – j| > 8, then the distance between si and sj is > δ.
Pf: No two points lie in the same ½δ-by-½δ box; only 8 such boxes lie within δ.
[figure: the strip partitioned into ½δ-by-½δ boxes]

closest pair of points: analysis
Number of pairwise distance calculations:

D(n) ≤ 0 for n = 1;   D(n) ≤ 2·D(n/2) + 7n for n > 1   ⇒   D(n) = O(n log n)

(A mostly superfluous detail: a straightforward implementation gives a running time that is a factor of log n larger, due to sorting in the various subproblems. Run time can be reduced to O(n log n) also, roughly by the trick of sorting by x at the top level, and returning/merging y-sorted lists from the subcalls. Regardless of this nuance, the big picture is the same: divide-and-conquer allows a sharp speed gain over a naive n^2 method.)
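A Python sketch of the whole algorithm (mine, following the outline above; it re-sorts the strip by y inside each call, so it is the “straightforward” variant noted in the parenthetical, not the fully tuned O(n log n) one):

    import math

    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def closest_pair(points):
        """Smallest pairwise distance among >= 2 points in the plane."""
        return _closest(sorted(points))            # sort by x once, at the top

    def _closest(pts):
        n = len(pts)
        if n <= 3:                                 # brute force tiny cases
            return min(dist(pts[i], pts[j])
                       for i in range(n) for j in range(i + 1, n))
        mid = n // 2
        x_mid = pts[mid][0]                        # the dividing line L
        delta = min(_closest(pts[:mid]), _closest(pts[mid:]))
        # keep only points within delta of L, sorted by y
        # (this re-sort is what costs the extra log factor)
        strip = sorted((p for p in pts if abs(p[0] - x_mid) < delta),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i + 1 : i + 9]:         # next <= 8 points in y-order
                if q[1] - p[1] >= delta:
                    break
                delta = min(delta, dist(p, q))
        return delta

    print(closest_pair([(0, 0), (3, 4), (1, 1), (7, 2), (4, 4), (2, 0.5)]))  # 1.0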

Integer Multiplication

integer arithmetic
Add. Given two n-digit integers a and b, compute a + b.
  O(n) bit operations.
Multiply. Given two n-digit integers a and b, compute a × b.
  The “grade school” method: Θ(n^2) bit operations.
[worked binary examples: adding and multiplying 11010101 and 01111101]

divide-and-conquer multiplication: warmup
To multiply two 2-digit integers:
  Multiply four 1-digit integers.
  Add, shift some 2-digit integers to obtain result.

x = 10·x1 + x0
y = 10·y1 + y0
xy = (10·x1 + x0)(10·y1 + y0) = 100·x1y1 + 10·(x1y0 + x0y1) + x0y0
[worked example: 45 × 32 = 1440 via the four 1-digit products]

Same idea works for long integers – can split them into 4 half-sized ints

divide-and-conquer multiplication: warmup
To multiply two n-digit integers:
  Multiply four n/2-digit integers.
  Add, shift some n/2-digit integers to obtain result.

x = 2^(n/2)·x1 + x0
y = 2^(n/2)·y1 + y0
xy = (2^(n/2)·x1 + x0)(2^(n/2)·y1 + y0)
   = 2^n·x1y1 + 2^(n/2)·(x1y0 + x0y1) + x0y0

T(n) = 4T(n/2) + Θ(n)   [4 recursive calls, plus add & shift]   ⇒   T(n) = Θ(n^2)
[worked binary example showing the partial products x0·y0, x0·y1, x1·y0, x1·y1]

(assumes n is a power of 2)

key trick: 2 multiplies for the price of 1

x = 2^(n/2)·x1 + x0
y = 2^(n/2)·y1 + y0
xy = (2^(n/2)·x1 + x0)(2^(n/2)·y1 + y0) = 2^n·x1y1 + 2^(n/2)·(x1y0 + x0y1) + x0y0
(Well, ok, “4 for 3” is more accurate…)

α = x1 + x0

β = y1 + y0

αβ = (x1 + x0 ) (y1 + y0)

= x1y1 + (x1y0 + x0 y1) + x0 y0

(x1y0 + x0 y1) = αβ − x1y1 − x0 y0


Karatsuba multiplication
To multiply two n-digit integers:
  Add two ½n-digit integers.
  Multiply three ½n-digit integers.
  Add, subtract, and shift ½n-digit integers to obtain result.

x = 2^(n/2)·x1 + x0
y = 2^(n/2)·y1 + y0
xy = 2^n·x1y1 + 2^(n/2)·(x1y0 + x0y1) + x0y0
   = 2^n·A + 2^(n/2)·(B − A − C) + C,   where A = x1y1, B = (x1+x0)(y1+y0), C = x0y0

Theorem. [Karatsuba-Ofman, 1962] Can multiply two n-digit integers in O(n^1.585) bit ops.

T(n) ≤ 3·T(n/2) + O(n)   [3 recursive calls, plus add, subtract, shift]
⇒ T(n) = O(n^(log2 3)) = O(n^1.585)
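For illustration, Karatsuba in Python, splitting by decimal digits rather than bits (my sketch; Python's built-in multiplication is already fast, so this is purely didactic):

    def karatsuba(x, y):
        """Multiply nonnegative integers using 3 half-size recursive multiplies."""
        if x < 10 or y < 10:
            return x * y                         # 1-digit base case
        m = max(len(str(x)), len(str(y))) // 2   # split point, in decimal digits
        x1, x0 = divmod(x, 10**m)
        y1, y0 = divmod(y, 10**m)
        a = karatsuba(x1, y1)                    # A = x1*y1
        c = karatsuba(x0, y0)                    # C = x0*y0
        b = karatsuba(x1 + x0, y1 + y0)          # B = (x1+x0)*(y1+y0)
        return a * 10**(2*m) + (b - a - c) * 10**m + c

    assert karatsuba(1234567, 7654321) == 1234567 * 7654321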

multiplication – the bottom line
Naïve: Θ(n^2)
Karatsuba: Θ(n^1.59…)
Amusing exercise: generalize Karatsuba to do 5 size-n/3 subproblems → Θ(n^1.46…)
Best known: Θ(n log n loglog n)
  “Fast Fourier Transform”, but mostly unused in practice (unless you need really big numbers – a billion digits of π, say)
  High precision arithmetic IS important for crypto

d & c summary
Idea:
  “Two halves are better than a whole” – if the base algorithm has super-linear complexity
  “If a little's good, then more's better” – repeat above, recursively
Applications: Many. Binary Search, Mergesort, (Quicksort), Closest points, Integer multiply, …
