
Coping With NP-Hardness

Suppose you need to solve NP-hard problem X.
■ Theory says you aren't likely to find a polynomial algorithm.
■ Should you just give up?
  ! Probably yes, if your goal is really to find a polynomial algorithm.
  ! Probably no, if your job depends on it.

Princeton University • COS 423 • Theory of Algorithms • Spring 2001 • Kevin Wayne

Coping With NP-Hardness

Brute-force algorithms.
■ Develop clever enumeration strategies.
■ Guaranteed to find optimal solution.
■ No guarantees on running time.

Heuristics.
■ Develop intuitive algorithms.
■ Guaranteed to run in polynomial time.
■ No guarantees on quality of solution.

Approximation algorithms.
■ Guaranteed to run in polynomial time.
■ Guaranteed to find "high quality" solution, say within 1% of optimum.
■ Obstacle: need to prove a solution's value is close to optimum, without even knowing what the optimum value is!

Approximation Algorithms and Schemes

ρ-approximation algorithm.
■ An algorithm A for problem P that runs in polynomial time.
■ For every problem instance, A outputs a feasible solution within ratio ρ of the true optimum for that instance.

Polynomial-time approximation scheme (PTAS).
■ A family of approximation algorithms {Aε : ε > 0} for a problem P.
■ Aε is a (1 + ε)-approximation algorithm for P.
■ Aε runs in time polynomial in the input size for fixed ε.

Fully polynomial-time approximation scheme (FPTAS).
■ PTAS where Aε runs in time polynomial in the input size and in 1 / ε.

Approximation Algorithms and Schemes

Types of approximation algorithms.
■ Fully polynomial-time approximation scheme.
■ Constant factor.

Knapsack Problem

Knapsack problem.
■ Given N objects and a "knapsack."
■ Item i weighs wi > 0 Newtons and has value vi > 0.
■ Knapsack can carry weight up to W Newtons.
■ Goal: fill knapsack so as to maximize total value.
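The greedy-by-density heuristic mentioned next (sort by vi / wi, take what fits) is easy to state in code. A minimal sketch, using the lecture's five-item instance with W = 11; the function name and (value, weight) pair representation are my own:

```python
# Greedy-by-density heuristic for knapsack: sort items by value/weight
# ratio and take each item that still fits. Fast and intuitive, but it
# carries no quality guarantee -- on this instance it is suboptimal.

def greedy_knapsack(items, W):
    """items: list of (value, weight) pairs; returns (total value, chosen item numbers)."""
    order = sorted(range(len(items)),
                   key=lambda i: items[i][0] / items[i][1],
                   reverse=True)
    total, remaining, chosen = 0, W, []
    for i in order:
        v, w = items[i]
        if w <= remaining:
            total += v
            remaining -= w
            chosen.append(i + 1)    # 1-based item numbers, as on the slide
    return total, chosen

# Lecture instance: items 1..5 with (value, weight), knapsack capacity 11.
items = [(1, 1), (6, 2), (18, 5), (22, 6), (28, 7)]
print(greedy_knapsack(items, 11))   # -> (35, [5, 2, 1]); OPT is 40 with {3, 4}
```

Greedy commits to item 5 (best ratio), after which neither item 3 nor item 4 fits, illustrating why a heuristic alone gives no approximation guarantee.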
Item	Value	Weight
1	1	1
2	6	2
3	18	5
4	22	6
5	28	7

W = 11.
Greedy (by decreasing vi / wi) = 35: { 5, 2, 1 }.
OPT value = 40: { 3, 4 }.

Knapsack is NP-Hard

KNAPSACK: Given a finite set X, nonnegative weights wi, nonnegative values vi, a weight limit W, and a desired value V, is there a subset S ⊆ X such that:
  ∑_{i∈S} wi ≤ W
  ∑_{i∈S} vi ≥ V

SUBSET-SUM: Given a finite set X, nonnegative values ui, and an integer t, is there a subset S ⊆ X whose elements sum to t?

Claim. SUBSET-SUM ≤P KNAPSACK.
Proof. Given instance (X, t) of SUBSET-SUM, create a KNAPSACK instance:
■ vi = wi = ui
■ V = W = t
Then ∑_{i∈S} ui ≤ t and ∑_{i∈S} ui ≥ t hold simultaneously iff S sums to exactly t.

Knapsack: Dynamic Programming Solution 1

OPT(n, w) = max profit subset of items {1, …, n} with weight limit w.
■ Case 1: OPT selects item n.
  – new weight limit = w – wn
  – OPT selects best of {1, 2, …, n – 1} using this new weight limit
■ Case 2: OPT does not select item n.
  – OPT selects best of {1, 2, …, n – 1} using weight limit w

OPT(n, w) =
  0                                                    if n = 0
  OPT(n – 1, w)                                        if wn > w
  max { OPT(n – 1, w), vn + OPT(n – 1, w – wn) }       otherwise

Directly leads to an O(N W) time algorithm.
■ W = weight limit.
■ Not polynomial in input size!

Knapsack: Dynamic Programming Solution 2

OPT(n, v) = min knapsack weight that yields value exactly v using a subset of items {1, …, n}.
■ Case 1: OPT selects item n.
  – new value needed = v – vn
  – OPT selects best of {1, 2, …, n – 1} that achieves this new value
■ Case 2: OPT does not select item n.
  – OPT selects best of {1, 2, …, n – 1} that achieves value v

OPT(n, v) =
  0                                                    if n = 0 and v = 0 (∞ if n = 0, v > 0)
  OPT(n – 1, v)                                        if vn > v
  min { OPT(n – 1, v), wn + OPT(n – 1, v – vn) }       otherwise

Directly leads to an O(N V*) time algorithm.
■ V* = optimal value.
■ Not polynomial in input size!

Knapsack: Bottom-Up

Bottom-Up Knapsack
INPUT: N, W, w1,…,wN, v1,…,vN
ARRAY: OPT[0..N, 0..V*]
FOR v = 0 to V*
    OPT[0, v] = 0 if v = 0, ∞ otherwise
FOR n = 1 to N
    FOR v = 1 to V*
        IF (vn > v)
            OPT[n, v] = OPT[n-1, v]
        ELSE
            OPT[n, v] = min {OPT[n-1, v], wn + OPT[n-1, v-vn]}
v* = max {v : OPT[N, v] ≤ W}
RETURN v*
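Dynamic Programming Solution 2 can be sketched directly in code. A minimal version, assuming the sum of all values as the upper bound on the value index (the slide's V* is not known in advance); the function name is my own:

```python
import math

# Sketch of the value-indexed DP: OPT[n][v] = minimum total weight
# needed to achieve value exactly v using a subset of items 1..n.
# Runs in O(N * V_sum) time, where V_sum = sum of all values.

def knapsack_by_value(values, weights, W):
    N = len(values)
    V_sum = sum(values)                     # upper bound on any achievable value
    INF = math.inf
    OPT = [[INF] * (V_sum + 1) for _ in range(N + 1)]
    OPT[0][0] = 0                           # value 0 needs no weight; others impossible
    for n in range(1, N + 1):
        vn, wn = values[n - 1], weights[n - 1]
        for v in range(V_sum + 1):
            OPT[n][v] = OPT[n - 1][v]       # Case 2: skip item n
            if v >= vn:                     # Case 1: take item n
                OPT[n][v] = min(OPT[n][v], wn + OPT[n - 1][v - vn])
    # answer: largest value whose minimum weight still fits in the knapsack
    return max(v for v in range(V_sum + 1) if OPT[N][v] <= W)

values  = [1, 6, 18, 22, 28]
weights = [1, 2, 5, 6, 7]
print(knapsack_by_value(values, weights, 11))   # -> 40, matching OPT = {3, 4}
```

Note the running time depends on the magnitude of the values, not just on N, which is exactly why this DP alone is not polynomial in the input size.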
Knapsack: FPTAS

Intuition for approximation algorithm.
■ Round all values down to lie in a smaller range.
■ Run the O(N V*) dynamic programming algorithm on the rounded instance.
■ Return the optimal items of the rounded instance.

Original instance (W = 11):
Item	Value	Weight
1	134,221	1
2	656,342	2
3	1,810,013	5
4	22,217,800	6
5	28,343,199	7

Rounded instance (W = 11):
Item	Value	Weight
1	1	1
2	6	2
3	18	5
4	222	6
5	283	7

Knapsack FPTAS.
■ Round all values: v̄n = ⌊vn / θ⌋
  – V = largest value in original instance
  – ε = precision parameter
  – θ = scaling factor = ε V / N
■ Bound on optimal value V*: V ≤ V* ≤ N V (assume wn ≤ W for all n).

Running time:
  O(N V̄*) ⊆ O(N (N V̄)) ⊆ O(N² (V / θ)) ⊆ O(N³ (1 / ε))
  – V̄ = largest value in rounded instance
  – V̄* = optimal value in rounded instance

Proof of correctness. Let S* = opt set of items in the original instance and S̄* = opt set of items in the rounded instance. Then:
  ∑_{n∈S̄*} vn ≥ θ ∑_{n∈S̄*} v̄n
             ≥ θ ∑_{n∈S*} v̄n
             ≥ ∑_{n∈S*} (vn − θ)
             ≥ ∑_{n∈S*} vn − N θ
             = V* − (ε V / N) N
             ≥ (1 − ε) V*

Knapsack: State of the Art

This lecture.
■ "Rounding and scaling" method finds a solution within a (1 − ε) factor of optimum for any ε > 0.
■ Takes O(N³ / ε) time and space.

Ibarra-Kim (1975), Lawler (1979).
■ Faster FPTAS: O(N log (1 / ε) + 1 / ε⁴) time.
■ Idea: group items by value into "large" and "small" classes.
  – run dynamic programming algorithm only on large items
  – insert small items according to ratio vn / wn
  – clever analysis
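The rounding-and-scaling scheme can be sketched end to end: round values down with θ = εV/N, run the value-indexed DP on the rounded instance, and return that item set's value in the original instance. This is a hedged sketch, not the lecture's exact code; the item-recovery walk and function name are my own:

```python
import math

# FPTAS sketch: scale values by theta = eps * V / N (floor rounding),
# solve the rounded instance exactly with the value-indexed DP, then
# report the chosen items' ORIGINAL value. Guarantee: >= (1 - eps) * OPT.

def knapsack_fptas(values, weights, W, eps):
    N = len(values)
    V = max(values)
    theta = eps * V / N                       # scaling factor from the slide
    vbar = [int(v // theta) for v in values]  # rounded-down values
    V_sum = sum(vbar)
    INF = math.inf
    OPT = [[INF] * (V_sum + 1) for _ in range(N + 1)]
    OPT[0][0] = 0
    for n in range(1, N + 1):
        vn, wn = vbar[n - 1], weights[n - 1]
        for v in range(V_sum + 1):
            OPT[n][v] = OPT[n - 1][v]
            if v >= vn:
                OPT[n][v] = min(OPT[n][v], wn + OPT[n - 1][v - vn])
    vstar = max(v for v in range(V_sum + 1) if OPT[N][v] <= W)
    chosen, v = [], vstar                     # recover an optimal rounded item set
    for n in range(N, 0, -1):
        if OPT[n][v] != OPT[n - 1][v]:
            chosen.append(n)
            v -= vbar[n - 1]
    return sum(values[n - 1] for n in chosen) # value in the original instance

values  = [134_221, 656_342, 1_810_013, 22_217_800, 28_343_199]
weights = [1, 2, 5, 6, 7]
print(knapsack_fptas(values, weights, 11, eps=0.5))
```

With ε = 0.5 the rounded values shrink drastically, so the DP table is tiny; the returned value is guaranteed to be at least half the true optimum on this instance.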
Approximation Algorithms and Schemes

Types of approximation algorithms.
■ Fully polynomial-time approximation scheme.
■ Constant factor.

Traveling Salesperson Problem

TSP: Given a complete (undirected) graph G = (V, E), integer edge weights c(e) ≥ 0, and an integer C, is there a Hamiltonian cycle whose total cost is at most C?

[Figure: a point set. Is there a tour of length at most 1570? Yes, red tour = 1565.]

Hamiltonian Cycle Reduces to TSP

HAM-CYCLE: Given an undirected graph G = (V, E), does there exist a simple cycle C that contains every vertex in V? (HAM-CYCLE is NP-complete.)

Claim. HAM-CYCLE ≤P TSP.
Proof. (HAM-CYCLE transforms to TSP)
■ Given G = (V, E), we want to decide if it is Hamiltonian.
■ Create an instance of TSP with G' = complete graph on V.
■ Set c(e) = 1 if e ∈ E, c(e) = 2 if e ∉ E, and choose C = |V|.
■ Γ Hamiltonian cycle in G ⇔ Γ has cost exactly |V| in G'.
  Γ not Hamiltonian in G ⇔ Γ has cost at least |V| + 1 in G'.

TSP

TSP-OPT: Given a complete (undirected) graph G = (V, E) with integer edge weights c(e) ≥ 0, find a Hamiltonian cycle of minimum cost.

Claim. If P ≠ NP, there is no ρ-approximation for TSP-OPT for any ρ ≥ 1.
Proof (by contradiction).
■ Suppose A is a ρ-approximation algorithm for TSP-OPT.
■ We show how to solve an instance G of HAM-CYCLE.
■ Create an instance of TSP with G' = complete graph.
■ Let C = |V|, c(e) = 1 if e ∈ E, and c(e) = ρ|V| + 1 if e ∉ E.
■ Γ Hamiltonian cycle in G ⇔ Γ has cost exactly |V| in G'.
  Γ not Hamiltonian in G ⇔ Γ has cost more than ρ|V| in G'.
■ Gap ⇒ if G has a Hamiltonian cycle, then A must return it.

TSP Heuristic

APPROX-TSP(G, c)
■ Find a minimum spanning tree T for (G, c).
■ W ← ordered list of vertices in a preorder walk of T.

[Figures: Input; MST (assume Euclidean distances).]
■ H ← cycle that visits the vertices in the order W (shortcutting the repeated vertices of the full walk).

[Figures: Preorder Traversal; Full Walk W: a b c b h b a d e f e g e d a; Hamiltonian Cycle H: a b c h d e f g a.]
[Figures: An Optimal Tour: 14.715; Hamiltonian Cycle H: 19.074 (assuming Euclidean distances).]

TSP With Triangle Inequality

∆-TSP: TSP where costs satisfy the ∆-inequality: for all u, v, w: c(u, w) ≤ c(u, v) + c(v, w).

Theorem.
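The three steps of APPROX-TSP can be sketched in one self-contained piece: build an MST (Prim's algorithm here), walk it in preorder, and shortcut the walk into a cycle. A hedged sketch under Euclidean distances; the point coordinates below are hypothetical, not the lecture's figure:

```python
import math
from itertools import permutations

# APPROX-TSP sketch: MST + preorder walk + shortcutting. Under the
# triangle inequality the resulting tour costs at most 2x the optimum.

def approx_tsp(points):
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    # Prim's MST, recording parents so we can do a preorder walk from vertex 0
    in_tree, parent = {0}, {}
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        parent[j] = i
        in_tree.add(j)
    children = {i: [] for i in range(n)}
    for j, i in parent.items():
        children[i].append(j)
    tour, stack = [], [0]                    # iterative DFS = preorder walk W
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    # shortcutting: visiting vertices in first-appearance order gives cycle H
    cost = sum(dist(tour[k], tour[(k + 1) % n]) for k in range(n))
    return tour, cost

points = [(0, 0), (0, 2), (2, 0), (2, 2), (1, 3)]   # hypothetical instance
tour, cost = approx_tsp(points)
best = min(                                  # exact optimum by brute force, for comparison
    sum(math.dist(points[p[k]], points[p[(k + 1) % 5]]) for k in range(5))
    for p in permutations(range(5)))
print(tour, round(cost, 3), round(best, 3))  # tour cost is within 2x of best
```

Brute force is only feasible here because the instance is tiny; it is included solely to check the 2-approximation bound empirically.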