Lecture 12: Knapsack Problems
Prof. Krishna R. Pattipati
Dept. of Electrical and Computer Engineering, University of Connecticut
Contact: [email protected]; (860) 486-2890
© K. R. Pattipati, 2001-2016

Outline
• Why solve this problem?
• Various versions of the Knapsack problem
• Approximation algorithms
• Optimal algorithms
  . Dynamic programming
  . Branch-and-bound

0–1 Knapsack problem
• A hitch-hiker has to fill up his knapsack of size V by selecting, from among various possible objects, those which will give him maximum comfort
• Alternatively, suppose we want to invest (all or in part) a capital of V dollars among n possible investments with different expected profits and different investment requirements, so as to maximize the total expected profit
• Other applications: cargo loading, cutting stock
• Mathematical formulation of the 0–1 Knapsack problem:
    \max \sum_{i=1}^{n} p_i x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_i \le V, \quad x_i \in \{0,1\}
• A knapsack is to be filled with different objects of profit p_i and weight w_i without exceeding the total capacity V
• Standing assumptions:
  . p_i and w_i are integers (if not, scale them)
  . w_i \le V for all i
  . \sum_{i=1}^{n} w_i > V

Other related formulations
• Bounded Knapsack problem
  . Suppose there are b_i items of type i ⇒ change x_i \in \{0,1\} to 0 \le x_i \le b_i, x_i integer
  . Unbounded Knapsack problem ⇒ b_i = \infty, 0 \le x_i \le \infty, x_i integer
  . Subset-sum problem
    o Find a subset of weights whose sum is closest to, without exceeding, the capacity:
        \max \sum_{i=1}^{n} w_i x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_i \le V, \quad x_i \in \{0,1\}
    o This is the Knapsack problem with p_i = w_i
    o Subset-sum is NP-hard ⇒ Knapsack is NP-hard
  . Change-making problem
        \max \sum_{i=1}^{n} x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_i \le V, \quad 0 \le x_i \le b_i, \ x_i \ \text{integer}, \ i = 1,\dots,n
    o b_i finite ⇒ bounded change-making problem
    o b_i = \infty ⇒ unbounded change-making problem
• 1-dimensional Knapsack with a cost constraint
      \max \sum_{i=1}^{n} p_i x_i \quad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_i \le V, \quad \sum_{i=1}^{n} c_i x_i \le C, \quad x_i \in \{0,1\}
• Multi-dimensional Knapsack (m knapsacks)
      \max \sum_{i=1}^{n} \sum_{j=1}^{m} p_i x_{ij} \quad \text{s.t.} \quad \sum_{i=1}^{n} w_i x_{ij} \le V, \ j = 1,\dots,m; \quad \sum_{j=1}^{m} x_{ij} \le 1, \ i = 1,\dots,n; \quad x_{ij} \in \{0,1\}
• Multi-dimensional Knapsack in which the profit and weight of each item depend on the knapsack selected for the item
      \max \sum_{i=1}^{n} \sum_{j=1}^{m} p_{ij} x_{ij} \quad \text{s.t.} \quad \sum_{i=1}^{n} w_{ij} x_{ij} \le V_j, \ j = 1,\dots,m; \quad \sum_{j=1}^{m} x_{ij} \le 1, \ i = 1,\dots,n; \quad x_{ij} \in \{0,1\}
• Loading problem, or variable-sized bin-packing problem
  . Given n objects with known volumes w_i and m boxes with limited capacities c_j, j = 1,…,m, minimize the number of boxes used
  . y_j = 1 if box j is used
  . x_{ij} = 1 if object i is put in box j
      \min \sum_{j=1}^{m} y_j \quad \text{s.t.} \quad \sum_{j=1}^{m} x_{ij} = 1, \ i = 1,\dots,n; \quad \sum_{i=1}^{n} w_i x_{ij} \le c_j y_j, \ j = 1,\dots,m; \quad y_j, x_{ij} \in \{0,1\}
  . c_j = c ⇒ bin-packing problem
• See S. Martello and P. Toth, Knapsack Problems: Algorithms and Computer Implementations, John Wiley, 1990, for an in-depth survey of these problems
• Here we consider only the 0–1 Knapsack problem

Relaxed LP version of the Knapsack problem
• Let us consider the relaxed LP version of the Knapsack problem
  . It gives an upper bound on the Knapsack problem (and its rounded-down solution gives a feasible lower bound)
  . Assume that the objects are ordered so that
      p_1/w_1 \ge p_2/w_2 \ge \dots \ge p_n/w_n
  . LP relaxation to find an upper bound: relax the constraints x_i \in \{0,1\} to 0 \le x_i \le 1
      \max \sum_i p_i x_i \quad \text{s.t.} \quad \sum_i w_i x_i \le V, \quad 0 \le x_i \le 1
      f^* \le f^*_{LP}
  . Dual of the relaxed LP (multiplier \lambda \ge 0 for the capacity constraint, \mu_i \ge 0 for x_i \le 1):
      \min \ V\lambda + \sum_{i=1}^{n} \mu_i \quad \text{s.t.} \quad w_i \lambda + \mu_i \ge p_i \ \forall i, \quad \lambda \ge 0, \ \mu_i \ge 0
    o Eliminating \mu_i gives \min_{\lambda \ge 0} \ V\lambda + \sum_{i=1}^{n} \max(0,\, p_i - w_i\lambda)
    o The optimal multipliers are \lambda^* = p_{r+1}/w_{r+1} and \mu_i^* = \max(0,\, p_i - w_i\lambda^*), where the break item index r satisfies \sum_{i=1}^{r} w_i \le V < \sum_{i=1}^{r+1} w_i
• Complementary slackness conditions
  . x_i = 1 ⇒ \mu_i > 0 and w_i\lambda + \mu_i = p_i
  . x_i < 1 ⇒ \mu_i = 0
• The optimal solution is as follows: if \sum_{i=1}^{r} w_i \le V < \sum_{i=1}^{r+1} w_i, then
      x_i = 1, \ i = 1,\dots,r; \qquad \mu_i = p_i - \lambda w_i
      x_i = 0, \ i = r+2,\dots,n; \qquad \mu_i = 0
      x_{r+1} = \frac{V - \sum_{i=1}^{r} w_i}{w_{r+1}}; \qquad \mu_{r+1} = 0
• So one upper bound is
      f^* \le f^*_{LP} = \sum_{i=1}^{r} p_i + \left(V - \sum_{i=1}^{r} w_i\right)\frac{p_{r+1}}{w_{r+1}}
  with optimal duals
      \lambda = \frac{p_{r+1}}{w_{r+1}}, \qquad \mu_i = p_i - w_i\,\frac{p_{r+1}}{w_{r+1}}, \ i = 1,\dots,r
• Clearly, f^*_{LP} equals the optimal dual objective function value
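For concreteness, here is a minimal Python sketch of this bound computation (not part of the original slides; the function name and return convention are illustrative). It assumes the items are already sorted by non-increasing p_i/w_i, that each w_i ≤ V, and that the total weight exceeds V, as stated above; the data are those of the 8-item example used later in this lecture.

```python
def lp_relaxation_bound(p, w, V):
    """Return (f_LP, x): optimal value and solution of the relaxed LP.
    Assumes p[i]/w[i] is non-increasing, every w[i] <= V, and sum(w) > V."""
    x = [0.0] * len(p)
    remaining = V
    for i, (pi, wi) in enumerate(zip(p, w)):
        if wi <= remaining:           # items 1..r are taken entirely
            x[i] = 1.0
            remaining -= wi
        else:                         # the break item r+1 is taken fractionally
            x[i] = remaining / wi
            break
    return sum(pi * xi for pi, xi in zip(p, x)), x

# Data of the 8-item example that appears later in this lecture:
p = [15, 100, 90, 60, 40, 15, 10, 1]
w = [2, 20, 20, 30, 40, 30, 60, 10]
f_lp, x = lp_relaxation_bound(p, w, 102)
print(f_lp)   # 295.0 = 265 + 30*(40/40), i.e., f*_LP with break item r+1 = 5
print(x)      # [1.0, 1.0, 1.0, 1.0, 0.75, 0.0, 0.0, 0.0]
```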
LP relaxation upper bound
• The LP relaxation provides the upper bound U_1 = \lfloor f^*_{LP} \rfloor \le 2f^*
  . The item set \{1,\dots,r\} and the single item \{r+1\} are both feasible Knapsack solutions, with values \sum_{i=1}^{r} p_i and p_{r+1}
  . Any feasible solution value is \le f^*
  . U_1 \le \sum_{i=1}^{r} p_i + p_{r+1} \le 2f^*
• We can obtain an even tighter bound than the LP relaxation (Martello and Toth)
  . Consider the two possibilities x_{r+1} = 0 and x_{r+1} = 1
  . x_{r+1} = 0 (x_{r+2} could be a fraction):
      \hat{U} = \sum_{i=1}^{r} p_i + \left(V - \sum_{i=1}^{r} w_i\right)\frac{p_{r+2}}{w_{r+2}}
  . x_{r+1} = 1 (x_r could be a fraction):
      \bar{U} = \sum_{i=1}^{r-1} p_i + p_{r+1} + \left(V - \sum_{i=1}^{r-1} w_i - w_{r+1}\right)\frac{p_r}{w_r}
              = \sum_{i=1}^{r} p_i + p_{r+1} - \left(w_{r+1} - \left(V - \sum_{i=1}^{r} w_i\right)\right)\frac{p_r}{w_r}
• Clearly, U_2 = \max(\hat{U}, \bar{U}) \ge f^*
• Why is this a better upper bound than the LP relaxation?
  . Clearly, U_2 \le U_1
  . One can show that
      U_1 - \bar{U} = \left(\frac{p_r}{w_r} - \frac{p_{r+1}}{w_{r+1}}\right)\left(\sum_{i=1}^{r+1} w_i - V\right) \ge 0

Knapsack Example
  . n = 8
  . p = [15, 100, 90, 60, 40, 15, 10, 1]
  . w = [2, 20, 20, 30, 40, 30, 60, 10]
  . V = 102
  . Optimal solution: x = [1, 1, 1, 1, 0, 1, 0, 0]
  . Optimal value: f^* = 280
  . LP relaxation: (r + 1) = 5 ⇒ U_1 = 265 + (30)(40)/40 = 295
  . Bound \hat{U} = 265 + (30)(15)/30 = 280
  . Bound \bar{U} = 245 + (20)(60)/30 = 285
  . Maximum of the latter two bounds: U_2 = 285
• These bounds can be further improved by solving two continuous relaxed LPs with the constraints x_{r+1} = 1 and x_{r+1} = 0, respectively
• Algorithms for solving the Knapsack problem
  . Approximation algorithms
  . Dynamic programming
  . Branch-and-bound
• Although the problem is NP-hard, problems of size n = 100,000 can be solved in a reasonable time
• Approximation algorithms provide lower bounds on the optimal solution
  . Such bounds can be obtained from a greedy heuristic

Greedy heuristic
• 0th order greedy heuristic
  . Load objects on the basis of the ratio p_i/w_i: examine the objects in the order p_1/w_1 \ge p_2/w_2 \ge \dots (with w_i \le V), and accept an object whenever it fits in the remaining capacity
  . The total profit of the accepted objects satisfies L(0) \le f^*
• kth order greedy heuristic
  . We can obtain a series of lower bounds by assuming that a certain set of objects J, with \sum_{i \in J} w_i \le V, is already in the knapsack
  . The remaining objects are assigned on the basis of the greedy method
  . This is basically the rollout concept of dynamic programming
  . The bound L(J) can be computed as follows:
    o z = V - \sum_{i \in J} w_i
    o y = \sum_{i \in J} p_i
    o For i = 1, …, n do
        If (i \notin J and w_i \le z) then
          J = J \cup \{i\};  z = z - w_i;  y = y + p_i
        End if
      End do
    o L(J) = y \le f^*

k-approximation algorithm
• Idea: consider all possible combinations of objects of size |J| = k, find the best lower bound, and use it as a solution of the Knapsack problem
  . If |J| = n, the result is optimal
    o We can control the complexity of the algorithm by varying k
  . k-approximation algorithm … Knapsack(k) (a short code sketch follows after the example below):
    o f(k) ← 0
    o Enumerate all subsets J ⊂ {1, …, n} such that |J| \le k and \sum_{i \in J} w_i \le V, and do
        f(k) ← \max(f(k), L(J))
  . Note: in practice, k = 2 would suffice to produce a solution within 2-5% of optimal
• Example
      \max \ 100x_1 + 50x_2 + 20x_3 + 10x_4 + 7x_5 + 3x_6
      \text{s.t.} \ 100x_1 + 50x_2 + 20x_3 + 10x_4 + 7x_5 + 3x_6 \le 165
  . p_i/w_i = 1 for all i
  . L(0) = 163 (0th order heuristic)
  . k = 1: L({1}) = 163, L({2}) = 163, L({3}) = 140, L({4}) = 163, L({5}) = 160, L({6}) = 163
  . Best 1st order heuristic solution = 163 … in fact, this is optimal!
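The L(J) computation and the Knapsack(k) enumeration above translate directly into code. Below is a minimal Python sketch (not from the original slides; the function names and the 0-based indexing are illustrative choices), assuming as before that the items are sorted by non-increasing p_i/w_i.

```python
from itertools import combinations

def greedy_lower_bound(p, w, V, J=frozenset()):
    """L(J): force the items in J, then fill the rest greedily by profit/weight.
    Items are indexed 0..n-1 and assumed sorted so p[0]/w[0] >= p[1]/w[1] >= ..."""
    z = V - sum(w[i] for i in J)      # residual capacity after forcing J
    if z < 0:
        return 0                      # J by itself exceeds the capacity
    y = sum(p[i] for i in J)
    for i in range(len(p)):
        if i not in J and w[i] <= z:
            z -= w[i]
            y += p[i]
    return y

def knapsack_k(p, w, V, k):
    """f(k): best greedy completion over all forced subsets J with |J| <= k
    whose forced weight does not exceed V."""
    n = len(p)
    best = 0
    for size in range(k + 1):
        for J in combinations(range(n), size):
            if sum(w[i] for i in J) <= V:
                best = max(best, greedy_lower_bound(p, w, V, frozenset(J)))
    return best

# The subset-sum example above (0-based indices): p = w, V = 165
p = w = [100, 50, 20, 10, 7, 3]
print(knapsack_k(p, w, 165, 0))                        # 163: the 0th-order bound L(0)
print(greedy_lower_bound(p, w, 165, frozenset({2})))   # 140: L({3}) in the table above
print(knapsack_k(p, w, 165, 1))                        # 163: best 1st-order bound
```

On the subset-sum example this reproduces the tabulated L({i}) values (shifted to 0-based indices) and the optimal value 163.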
Performance of the k-approximation algorithm
• Can we say anything about the performance of the approximation algorithm? Yes!
  . The recursive approximation algorithm Knapsack(k) provides a solution f(k) such that
      \frac{f^* - f(k)}{f^*} \le \frac{1}{k+1}
  . Before we provide a proof, let us consider its implications
    o If we want a solution within \epsilon of optimal: \frac{1}{k+1} \le \epsilon ⇒ k \ge \frac{1}{\epsilon} - 1
    o Time complexity: the number of times the greedy heuristic is executed is
        \sum_{i=0}^{k} \binom{n}{i} \le \sum_{i=0}^{k} n^i = \frac{n^{k+1} - 1}{n - 1} = O(n^k)
      and each greedy pass takes O(n) operations, so the overall complexity is O(n^{k+1})

Proof of the bound: setting the stage
• Let R^* be the set of objects included in the knapsack in an optimal solution:
    \sum_{i \in R^*} p_i = f^*, \qquad \sum_{i \in R^*} w_i \le V
• If |R^*| \le k, our approximation algorithm would have found it … so assume otherwise: |R^*| > k
• Let (\hat{p}_i, \hat{w}_i), 1 \le i \le |R^*|, denote the objects in R^*
• Assume that these objects are ordered as follows:
  . First k: \hat{p}_1 \ge \hat{p}_2 \ge \dots \ge \hat{p}_k are the k largest profits
  . The rest: \hat{p}_i/\hat{w}_i \ge \hat{p}_{i+1}/\hat{w}_{i+1}, \ k < i < |R^*|
• Then \hat{p}_1 + \hat{p}_2 + \dots + \hat{p}_{|R^*|} = f^*
• For every t with k < t \le |R^*|:
    f^* \ge \sum_{i=1}^{k} \hat{p}_i + \hat{p}_t \ge (k+1)\,\hat{p}_t \quad ⇒ \quad \hat{p}_t \le \frac{f^*}{k+1}

Proof of the k-approximation bound - 1
• Let us look at what the k-approximation algorithm does … it must have considered the set J of the k largest profits in R^* at least once … let
    f_J = \sum_{i \in J} p_i = \sum_{i=1}^{k} \hat{p}_i
• Let us consider what happens when the k-approximation algorithm executed this iteration.
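A quick numerical sanity check of the guarantee above (not part of the original slides; the helpers repeat the greedy logic sketched earlier so that the snippet stays self-contained, and the brute-force step is viable only because n = 8):

```python
from itertools import combinations

def greedy_fill(p, w, V, J):
    """L(J): profit of forcing the (0-based) index set J, then filling greedily."""
    z = V - sum(w[i] for i in J)
    if z < 0:
        return 0                      # J by itself does not fit
    y = sum(p[i] for i in J)
    for i in range(len(p)):           # items assumed sorted by p[i]/w[i] descending
        if i not in J and w[i] <= z:
            z -= w[i]
            y += p[i]
    return y

def f_k(p, w, V, k):
    """Knapsack(k): best greedy completion over all subsets J with |J| <= k."""
    n = len(p)
    return max(greedy_fill(p, w, V, set(J))
               for size in range(k + 1)
               for J in combinations(range(n), size))

def brute_force_opt(p, w, V):
    """Exact f* by enumerating all 2^n subsets (cheap here since n = 8)."""
    n = len(p)
    return max(sum(p[i] for i in S)
               for size in range(n + 1)
               for S in combinations(range(n), size)
               if sum(w[i] for i in S) <= V)

p = [15, 100, 90, 60, 40, 15, 10, 1]
w = [2, 20, 20, 30, 40, 30, 60, 10]
V = 102
f_star = brute_force_opt(p, w, V)             # 280, as on the example above
for k in range(3):
    fk = f_k(p, w, V, k)
    gap = (f_star - fk) / f_star
    print(k, fk, gap, "<=", 1 / (k + 1))      # the relative gap stays within 1/(k+1)
```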