
Lecture 12: Knapsack Problems

Prof. Krishna R. Pattipati Dept. of Electrical and Computer Engineering University of Connecticut Contact: [email protected]; (860) 486-2890

© K. R. Pattipati, 2001-2016

Outline
• Why solve this problem?
• Various versions of the Knapsack problem
• Approximation algorithms
• Optimal algorithms
 . Dynamic programming
 . Branch-and-bound

VUGRAPH 2 0–1 Knapsack problem

• A hitch-hiker has to fill up his knapsack of size V by selecting, from among various possible objects, those which will give him maximum comfort
• Suppose we want to invest (all or in part) a capital of V dollars among n possible investments with different expected profits and different investment requirements, so as to maximize the total expected profit
• Other applications: cargo loading, cutting stock
• Mathematical formulation of the 0–1 Knapsack problem:

 max Σ_{i=1}^n p_i x_i
 s.t. Σ_{i=1}^n w_i x_i ≤ V
 x_i ∈ {0, 1}

• A knapsack is to be filled with different objects of profit pi and of weights wi without exceeding the total weight V

• pi & wi are integers (if not, scale them)

• w_i ≤ V, ∀i
• Σ_{i=1}^n w_i > V

VUGRAPH 3 Other related formulations

• Bounded Knapsack problem

. Suppose there are bi items of type i

⇒ Change x_i ∈ {0, 1} to 0 ≤ x_i ≤ b_i, x_i integer

. Unbounded Knapsack problem

⇒ b_i = ∞: 0 ≤ x_i < ∞, x_i integer

. Subset-sum problem
 o Find a subset of weights whose sum is closest to, without exceeding, the capacity:

 max Σ_{i=1}^n w_i x_i
 s.t. Σ_{i=1}^n w_i x_i ≤ V
 x_i ∈ {0, 1}

 o A Knapsack problem with w_i = p_i
 o Subset-sum is NP-hard ⇒ Knapsack is NP-hard

. Change-making problem

 max Σ_{i=1}^n x_i
 s.t. Σ_{i=1}^n w_i x_i ≤ V
 0 ≤ x_i ≤ b_i, x_i integer, i = 1, …, n

o bi finite ⇒ bounded change-making problem

o bi = ∞ ⇒ unbounded change-making problem

VUGRAPH 4 Other related formulations

• 1-dimensional knapsack with a cost constraint:

 max Σ_{i=1}^n p_i x_i
 s.t. Σ_{i=1}^n w_i x_i ≤ V
 Σ_{i=1}^n c_i x_i ≤ C
 x_i ∈ {0, 1}

• Multi-dimensional knapsack (m knapsacks):

 max Σ_{i=1}^n Σ_{j=1}^m p_i x_{ij}
 s.t. Σ_{i=1}^n w_i x_{ij} ≤ V_j, j = 1, …, m
 Σ_{j=1}^m x_{ij} ≤ 1, i = 1, …, n
 x_{ij} ∈ {0, 1}, i = 1, …, n; j = 1, …, m

• Multi-dimensional knapsack in which the profit and weight of each item depend on the knapsack selected for the item:

 max Σ_{i=1}^n Σ_{j=1}^m p_{ij} x_{ij}
 s.t. Σ_{i=1}^n w_{ij} x_{ij} ≤ V_j, j = 1, …, m
 Σ_{j=1}^m x_{ij} ≤ 1, i = 1, …, n
 x_{ij} ∈ {0, 1}, i = 1, …, n; j = 1, …, m

VUGRAPH 5 Other related formulations

• Loading problem or variable-sized bin-packing problem

. Given n objects with known volumes wi & m boxes with limited capacity cj, j=1,…,m, minimize the number of boxes used

. y_j = 1 if box j is used

. xij = 1 if object i is put in box j

n min  y j j1 m s.t.  xij  1; i 1, , n i1 n wi x ij c j y j ; j 1, , m i1

yxj , ij  {0,1}

. c_j = c, ∀j ⇒ bin-packing problem
• See S. Martello and P. Toth, Knapsack Problems: Algorithms and Computer Implementations, John Wiley, 1990, for an in-depth survey of these problems
• Here we consider only the 0–1 Knapsack problem

VUGRAPH 6 Relaxed LP version of Knapsack problem

• Let us consider the relaxed LP version of the Knapsack problem
 . Gives us an upper and a lower bound on the Knapsack problem
 . Assume that the objects are ordered so that

 p_1/w_1 ≥ p_2/w_2 ≥ … ≥ p_n/w_n

 . LP relaxation to find an upper bound — relax the 0–1 constraints x_i ∈ {0, 1} to 0 ≤ x_i ≤ 1:

 max Σ_i p_i x_i
 s.t. Σ_i w_i x_i ≤ V
 0 ≤ x_i ≤ 1
 ⇒ f* ≤ f*_LP

 . Dual of the relaxed LP:

 min Vλ + Σ_{i=1}^n μ_i          ⇔ min_{λ ≥ 0} [Vλ + Σ_{i=1}^n max(0, p_i − w_i λ)]
 s.t. w_i λ + μ_i ≥ p_i, i = 1, …, n
 λ ≥ 0, μ_i ≥ 0                   with μ_i = max(0, p_i − w_i λ)

 λ* = p_{r+1}/w_{r+1}, where r: Σ_{i=1}^r w_i ≤ V < Σ_{i=1}^{r+1} w_i

VUGRAPH 7 Relaxed LP version of Knapsack problem

• Complementary slackness conditions
 . x_i = 1 ⇒ μ_i > 0 and w_i λ + μ_i = p_i
 . x_i < 1 ⇒ μ_i = 0
• Optimal solution is as follows:

 If Σ_{i=1}^r w_i ≤ V < Σ_{i=1}^{r+1} w_i, then
 x_i = 1, i = 1, …, r;  μ_i = p_i − λ w_i
 x_i = 0, i = r+2, …, n;  μ_i = 0
 x_{r+1} = (V − Σ_{i=1}^r w_i)/w_{r+1};  μ_{r+1} = 0

⇒ one upper bound is:

 f* ≤ f*_LP = Σ_{i=1}^r p_i + (V − Σ_{i=1}^r w_i) p_{r+1}/w_{r+1}

 λ_optimal = p_{r+1}/w_{r+1}
 μ_i = p_i − (w_i/w_{r+1}) p_{r+1}, i = 1, …, r

• Clearly, f*_LP = optimal dual objective function value

VUGRAPH 8 LP relaxation upper bound

• The LP relaxation solution provides the upper bound U_1 = ⌊f*_LP⌋ ≤ 2f*
 . Both {1, …, r} and {r+1} are feasible Knapsack solutions
 . Any feasible solution ≤ f*
 . U_1 ≤ sum of two feasible solutions ≤ 2f*
• We can obtain an even tighter bound than the LP relaxation (Martello and Toth)

 . Consider the two possibilities x_{r+1} = 0 or x_{r+1} = 1

x_{r+1} = 0 (x_{r+2} could be a fraction):

 Û = Σ_{i=1}^r p_i + ⌊(V − Σ_{i=1}^r w_i) p_{r+2}/w_{r+2}⌋

x_{r+1} = 1 (x_r could be a fraction):

 Ũ = Σ_{i=1}^{r−1} p_i + p_{r+1} + ⌊(V − Σ_{i=1}^{r−1} w_i − w_{r+1}) p_r/w_r⌋
   = Σ_{i=1}^r p_i + ⌊p_{r+1} − (Σ_{i=1}^{r+1} w_i − V) p_r/w_r⌋

VUGRAPH 11 LP relaxation upper bound

• Clearly,

ˆ * U2 max U , U f

• Why is this a better upper bound than the LP relaxation?
 . Clearly, Û ≤ U_1
 . Can show that

 U_1 − Ũ = (p_r/w_r − p_{r+1}/w_{r+1}) (Σ_{i=1}^{r+1} w_i − V) ≥ 0

VUGRAPH 12 Knapsack example

 . n = 8
 . p = [15, 100, 90, 60, 40, 15, 10, 1]
 . w = [2, 20, 20, 30, 40, 30, 60, 10]
 . V = 102
 . Optimal solution: x = [1, 1, 1, 1, 0, 1, 0, 0]; optimal value: f* = 280
 . LP relaxation: (r + 1) = 5 ⇒ U_1 = 265 + (30)(40)/40 = 295
 . Bound Û = Σ_{i=1}^r p_i + ⌊(V − Σ_{i=1}^r w_i) p_{r+2}/w_{r+2}⌋ = 265 + (30)(15)/30 = 280
 . Bound Ũ = Σ_{i=1}^{r−1} p_i + p_{r+1} + ⌊(V − Σ_{i=1}^{r−1} w_i − w_{r+1}) p_r/w_r⌋ = 245 + (20)(60)/30 = 285
 . Maximum of the latter two bounds: U_2 = 285
• These bounds can be further improved by solving two continuous relaxed LPs with the constraints x_{r+1} = 1 or x_{r+1} = 0, respectively
• Algorithms for solving the Knapsack problem:
 . Approximation algorithms
 . Dynamic programming
 . Branch-and-bound
• Although the problem is NP-hard, problems of size n = 100,000 can be solved in a reasonable time
• Approximate algorithms give lower bounds on the optimal solution
 . Can be obtained from a greedy heuristic
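The three bounds of this example can be reproduced in code (a sketch with 0-based indexing, so the slides' critical item r+1 is index r here; it assumes pre-sorted objects and at least two items beyond the fully loaded ones; the function name is ours):

```python
import math

def knapsack_bounds(p, w, V):
    """Returns (U1, U_hat, U_tilde): the LP bound and the Martello-Toth
    pair for the cases x_{r+1} = 0 and x_{r+1} = 1. Assumes p[i]/w[i] is
    non-increasing and the critical item is neither first nor last."""
    P = W = r = 0
    while W + w[r] <= V:                  # load objects until one no longer fits
        W += w[r]; P += p[r]; r += 1      # object r (0-based) is now critical
    res = V - W                           # residual capacity
    U1 = P + (res * p[r]) // w[r]
    U_hat = P + (res * p[r + 1]) // w[r + 1]        # skip the critical item
    U_tilde = P + math.floor(p[r] - (w[r] - res) * p[r - 1] / w[r - 1])  # take it
    return U1, U_hat, U_tilde

p = [15, 100, 90, 60, 40, 15, 10, 1]
w = [2, 20, 20, 30, 40, 30, 60, 10]
print(knapsack_bounds(p, w, 102))   # (295, 280, 285), so U2 = max = 285
```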

VUGRAPH 13 Greedy heuristic

• 0th order greedy heuristic
 . Scan objects in order of decreasing p_i/w_i (p_1/w_1 ≥ p_2/w_2 ≥ …), loading object i whenever w_i ≤ remaining capacity:

 L(0) = Σ_{i loaded} p_i ≤ f*

• kth order greedy heuristic
 . We can obtain a series of lower bounds by assuming that a certain set of objects J, with Σ_{i∈J} w_i ≤ V, is already in the knapsack
 . And assigning the rest on the basis of the greedy method
 . This is basically the rollout concept of dynamic programming

VUGRAPH 14 kth order Greedy heuristic

. The bound can be computed as follows:

 o z = V − Σ_{i∈J} w_i
 o y = Σ_{i∈J} p_i
 o For i = 1, …, n do
   If (i ∉ J & w_i ≤ z) then
    J = J ∪ {i}
    z = z − w_i
    y = y + p_i
   End if
 o End do
 o L(J) = y ≤ f*

VUGRAPH 15 k-approximation algorithm

• Idea: consider all possible combinations of objects of size |J| ≤ k, find the best lower bound L(J), and use it as a solution of the Knapsack problem
 . If |J| = n, optimal
 o We can control the complexity of the algorithm by varying k
• k-approximation algorithm … Knapsack(k)
 o f(k) ← 0
 o Enumerate all subsets J ⊆ {1, …, n} such that |J| ≤ k and Σ_{i∈J} w_i ≤ V, and do
  f(k) ← max{f(k), L(J)}
 . Note: in practice, k = 2 would suffice to produce a solution within 2-5% of optimal
• Example

 max 100x_1 + 50x_2 + 20x_3 + 10x_4 + 7x_5 + 3x_6
 s.t. 100x_1 + 50x_2 + 20x_3 + 10x_4 + 7x_5 + 3x_6 ≤ 165

 p_i/w_i = 1, ∀i
 L(0) = 163 … 0th order heuristic
 k = 1: L({1}) = 163, L({2}) = 163, L({3}) = 140, L({4}) = 163, L({5}) = 160, L({6}) = 163
 ⇒ best 1st order heuristic solution = 163 … in fact, this is optimal!

VUGRAPH 16 k-approximation algorithm

• Can we say anything about the performance of the approximation algorithm? Yes!
 . The recursive approximation algorithm Knapsack(k) provides a solution f(k) ∋

 (f* − f(k))/f* ≤ 1/(k + 1)

 . Before we provide a proof, let us consider its implications
 o If we want a solution within ε of optimal:
  1/(k + 1) ≤ ε ⇒ k ≥ 1/ε − 1
 o Time complexity
  - Number of times greedy(0) is executed: Σ_{i=0}^k C(n, i) ≤ Σ_{i=0}^k n^i = O(n^k)
  - Each greedy takes O(n) operations ⇒ O(n^{k+1}) overall

VUGRAPH 17 Proof of the bound: setting the stage

• Let R* be the set of objects included in the knapsack in an optimal solution:

 Σ_{i∈R*} p_i = f*  &  Σ_{i∈R*} w_i ≤ V

• If |R*| ≤ k, our approximation algorithm would have found it … so, assume |R*| > k
• Let (p̂_i, ŵ_i), 1 ≤ i ≤ |R*|, denote the objects in R*
• Assume that we order these objects as follows:
 . First k: p̂_1 ≥ p̂_2 ≥ ⋯ ≥ p̂_k are the largest profits
 . The rest: p̂_i/ŵ_i ≥ p̂_{i+1}/ŵ_{i+1}, k < i < |R*|
• p̂_1 + p̂_2 + ⋯ + p̂_k ≤ f*
• f* = Σ_{i=1}^{|R*|} p̂_i ≥ Σ_{i=1}^k p̂_i + p̂_t ≥ (k + 1) p̂_t, t = k+1, …, |R*|
 ⇒ p̂_t ≤ f*/(k + 1)

VUGRAPH 18 Proof of the k-approximation bound - 1

• Let us look at what the k-approximation algorithm does … it must have considered the set J of the k largest profits in R* at least once … let

 f_J ≜ L(J) ≥ Σ_{i=1}^k p̂_i

• Let us consider what happens when the k-approximation algorithm executed this iteration
 . Suppose that the approximation algorithm did not include an object that is in the optimal solution
 . Let l be the first such object, that is, l ∉ J
 . Suppose l corresponds to (p̂_m, ŵ_m) in R*
• Why was l not included by Knapsack(k)?
 . Residual capacity of the knapsack z < w_l = ŵ_m
 . Up to that point the algorithm must have included objects 1, …, m−1 of R*, and it filled the rest of the used capacity with objects whose profit/weight ratio is at least p̂_m/ŵ_m; so, with S ≜ Σ_{i=1}^{m−1} p̂_i,

 f(k) ≥ f_J ≥ Σ_{i=1}^k p̂_i + Σ_{i=k+1}^{m−1} p̂_i + (p̂_m/ŵ_m)(V − Σ_{i=1}^{m−1} ŵ_i − z) = S + (p̂_m/ŵ_m)(V − Σ_{i=1}^{m−1} ŵ_i − z)

VUGRAPH 19 Proof of the k-approximation bound - 2

• Also, from the LP relaxation bound applied to the remaining capacity,

 Σ_{i=m}^{|R*|} p̂_i ≤ (p̂_m/ŵ_m)(V − Σ_{i=1}^{m−1} ŵ_i)

• By definition,

 f* = Σ_{i=1}^{|R*|} p̂_i = S + Σ_{i=m}^{|R*|} p̂_i ≤ S + (p̂_m/ŵ_m)(V − Σ_{i=1}^{m−1} ŵ_i)

• Subtracting the lower bound on f_J:

 f* − f_J ≤ (p̂_m/ŵ_m) z < p̂_m  (since z < ŵ_m)

 ⇒ f* < f(k) + p̂_m ⇒ (f* − f(k))/f* ≤ p̂_m/f* ≤ 1/(k + 1)

VUGRAPH 20 Proof of the k-approximation bound - 3

• Since p̂_m is one of p̂_{k+1}, ⋯, p̂_{|R*|} ⇒ p̂_m ≤ p̄ = (k+1)st largest element of p̂_1, ⋯, p̂_m

 (f* − f(k))/f* ≤ p̂_m/f* ≤ p̄/f* ≤ min{1/(k + 1), p̄/f(k)}

• Example
 . p = [11, 21, 31, 33, 43, 53, 55, 65]; w = [1, 11, 21, 23, 33, 43, 45, 55]; V = 110
 . Optimal: objects {1, 2, 3, 5, 6}, f* = 159, Σ w_i = 109 < 110
 . k = 0 ⇒ x = [1, 1, 1, 1, 1, 0, 0, 0], f = 139 ⇒ (f* − f)/f* = 20/159 = 0.126, Σ w_i = 89
 . k = 1 ⇒ x = [1, 1, 1, 1, 0, 0, 1, 0], f = 151 ⇒ (f* − f)/f* = 8/159 = 0.05, Σ w_i = 101
 . k = 2 ⇒ x = [1, 1, 1, 0, 1, 1, 0, 0], f = 159 ⇒ (f* − f)/f* = 0, Σ w_i = 109

VUGRAPH 21 Example

 max 9x_1 + 5x_2 + 3x_3 + x_4
 s.t. 7x_1 + 4x_2 + 3x_3 + 2x_4 ≤ 10

 note: 9/7 ≥ 5/4 ≥ 3/3 ≥ 1/2 (OK)

 k = 0: L(∅) = 12, x = [1, 0, 1, 0]
 k = 1: L({1}) = 12, L({2}) = 9, L({3}) = 12, L({4}) = 10
 k = 2: L({1,3}) = 12, L({1,4}) = 10, L({2,3}) = 9, L({2,4}) = 10, L({3,4}) = 10, etc.
 f(2) = 12 = f*

 . Usually k = 1 or k = 2 works OK (within 2-5% of optimal)

VUGRAPH 22 Dynamic programming approach

• Views the problem as a sequence of decisions related to variables x1, x2,…, xn

• That is, we decide whether x1 = 0 or 1, then consider x2, and so on • Consider the “generalized” Knapsack problem:

l  max  pxii  ik  l  s.t. wii x U Knap( k , l , U ) ik  x {0,1}, i k , , l i  

• Actually want to solve: Knap (1, n, V) • To solve Knap (1, n, V), the DP algorithm employs the principle of optimality . “The optimal sequence of decisions has the property that, whatever the initial state and past decisions are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the current decision”

VUGRAPH 23 Backward DP approach

• Suppose f_0*(V) = optimal value of Knap(1, n, V)
• f_j*(U) = optimal value of Knap(j + 1, n, U), 1 ≤ j ≤ n − 1
• Clearly, f_n*(U) = 0, ∀U
• Let us look at f_0*(V) … suppose we make the decision for x_1
• If x_1 = 0, x_2 ⋯ x_n must be the optimal solution of Knap(2, n, V), of value f_1*(V)
• If x_1 = 1, x_2 ⋯ x_n must be the optimal solution of Knap(2, n, V − w_1), of value f_1*(V − w_1)

 f_0*(V) = max{f_1*(V), f_1*(V − w_1) + p_1}
            (x_1 = 0)    (x_1 = 1)

∗ • Similarly, 푓푗 (푈) = optimal solution of Knap (j + 1, n, U)

 f_j*(U) = max{f_{j+1}*(U), f_{j+1}*(U − w_{j+1}) + p_{j+1}}, j = n−1, …, 0
            (x_{j+1} = 0)   (x_{j+1} = 1)

 f_n*(U) = 0, ∀U

VUGRAPH 24 Forward DP

• Start with f_n*(U) = 0, ∀U = 0, 1, 2, …, V … successively evaluate f*_{n−1}(U), ∀U, then f*_{n−2}(U), ∀U, etc.

• Note: decision x_{j+1} depends on x_{j+2}, …, x_n ⇒ backward recursion
• Alternately, suppose we know the optimal solution to Knap(1, j, U) … we could have solved it in one of two ways:
 . x_j = 0 and knew the solution to Knap(1, j−1, U) = S*_{j−1}(U)
 . x_j = 1 and knew the solution to Knap(1, j−1, U − w_j) = S*_{j−1}(U − w_j) + p_j
 . So we can get a forward recursion

 S_j*(U) = max{S*_{j−1}(U), S*_{j−1}(U − w_j) + p_j}, where
 S_0*(U) = 0, ∀U ≥ 0 and S_0*(U) = −∞, ∀U < 0

. Note: decision xj depends on x1,…, xj-1 ⇒ forward recursion

VUGRAPH 25 Example

 S_j*(U) = max{S*_{j−1}(U), S*_{j−1}(U − w_j) + p_j}
 S_0*(U) = 0, ∀U ≥ 0 and S_0*(U) = −∞, ∀U < 0

• p = [1, 2, 5]; w = [2, 3, 4]; V = 6

 U | (S_0, x_0) | (S_1, x_1) | (S_2, x_2) | (S_3, x_3)
 0 | (0, –)     | (0, 0)     | (0, 0)     | (0, 0)
 1 | (0, –)     | (0, 0)     | (0, 0)     | (0, 0)
 2 | (0, –)     | (1, 1)     | (1, 0)     | (1, 0)
 3 | (0, –)     | (1, 1)     | (2, 1)     | (2, 0)
 4 | (0, –)     | (1, 1)     | (2, 1)     | (5, 1)
 5 | (0, –)     | (1, 1)     | (3, 1)     | (5, 1)
 6 | (0, –)     | (1, 1)     | (3, 1)     | (6, 1)

 ⇒ x_1 = 1; x_2 = 0; x_3 = 1; f* = 6
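The table can be generated by a direct implementation of the forward recursion (a Python sketch; it also backtracks through the table to recover the decisions x, and the function name is ours):

```python
def knapsack_forward(p, w, V):
    """Forward DP: S_j(U) = max(S_{j-1}(U), S_{j-1}(U - w_j) + p_j),
    built column by column; O(nV) time and space. Returns (f*, x)."""
    n = len(p)
    S = [[0] * (V + 1) for _ in range(n + 1)]   # S[0][U] = 0 for U >= 0
    for j in range(1, n + 1):
        for U in range(V + 1):
            S[j][U] = S[j - 1][U]                         # x_j = 0
            if w[j - 1] <= U:                             # x_j = 1, if it fits
                S[j][U] = max(S[j][U], S[j - 1][U - w[j - 1]] + p[j - 1])
    x, U = [0] * n, V                  # backtrack the decisions
    for j in range(n, 0, -1):
        if S[j][U] != S[j - 1][U]:     # the x_j = 1 branch was taken
            x[j - 1] = 1
            U -= w[j - 1]
    return S[n][V], x

print(knapsack_forward([1, 2, 5], [2, 3, 4], 6))   # (6, [1, 0, 1])
```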

• Computational load and storage: O(nV)
• It is considered a pseudo-polynomial algorithm, since V is exponential in its encoding length log_2 V, so O(nV) is not polynomial in the input size
• By encoding x as a bit string, storage can be reduced to O((1 + n/d)V), where d is the word length of the computer (see Martello and Toth's book)

VUGRAPH 26 Branch-and-bound method: Basic idea

• Finds the solution via a systematic search of the solution space • For Knapsack problem, the solution space consists of 2n vectors of 0s and 1s • The solution space can be represented as a tree

 [Figure: binary decision tree — level j branches on x_j = 1 (left) or x_j = 0 (right); the 2^n leaves are the candidate solution vectors]

. Leaves correspond to “potential solutions,” not necessarily feasible . Key: o Don’t want to search the entire tree o Don’t want to generate infeasible solutions o Don’t want to generate tree nodes that do not lead to a better solution than the one on hand (⇒ a bounding function)

VUGRAPH 27 B&B for knapsack problem

• The bounding function can be derived from the LP relaxation
 . Given the current contents J of the knapsack, with profit P and weight W, we can construct the bounding function as follows:
 o Suppose k is the last object considered, i.e., P & W correspond to y_1 ⋯ y_k (y_i = tentative x_i); then

 UB = P + max Σ_{i=k+1}^n p_i y_i
 s.t. Σ_{i=k+1}^n w_i y_i ≤ V − W
 0 ≤ y_i ≤ 1

 o If the UB ≤ current best solution, then further search from this tree node is worthless
 o So, backtrack ⇒ move to the right branch (set the variable to 0) if we were in the left one, or go back to the last variable with value 1

VUGRAPH 28 Example

• We will explain the algorithm by means of an example (Syslo, Deo, and Kowalik)

 max 2x_1 + 3x_2 + 6x_3 + 3x_4
 s.t. x_1 + 2x_2 + 5x_3 + 4x_4 ≤ 5
 x_i ∈ {0, 1}

• Initially P = W = 0, k = 0 & f = –1
 . Partial solution: y_1 = 1, y_2 = 1 with profit 5 and weight 3 ⇒ P = 5, W = 3, k + 1 = 3
 . Bound: b = 5 + int((2)(6)/5) = 7
 . Bound > f … perform a forward move

VUGRAPH 29 Example

 [Figure: search tree over y_1, …, y_4 — node bounds 7, 6, 6; leaf values 5, 5, 5, 6, 3; leaves with value ≤ f are pruned; incumbents f = 5, then f = 6]

• y_3 = 0 (since object 3 cannot fit) and y_4 = 1 is infeasible

• Set y_4 = 0; now k + 1 = 5 > n = 4 and, since the current profit 5 > f = –1, the current solution is the best found so far … so, set f = 5 and k = 4

• Backtrack to the last object assigned to the knapsack … remove it … y_2 = 0 ⇒ P = 2, W = 1; y_3 = 1 fails ⇒ y_3 = 0 … but y_4 = 1 is O.K. ⇒ y = [1, 0, 0, 1] and P = 5 … this does not improve the current best solution f = 5

• Backtrack to y_1 … set y_1 = 0; after assigning y_2 = 1, the bound is b = 6 … b > f, so we do another forward move … the next call to bound results in b ≤ f ⇒ backward move

• Backtrack to y_2 … set y_2 = 0; after assigning y_3 = 1, the bound is b = 6 with a solution P = 6, W = 5, which is better than the best solution f … actually, you could have stopped here, because the upper bound equals a feasible solution

• Finally, y3 = 0, and bound assigns y4 = 1 and returns 3 … since there are no other objects in the knapsack to be removed, the algorithm terminates • There exist variations that provide better computational performance … see the book by Martello and Toth
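The whole depth-first scheme, with the LP-relaxation bounding function above, can be sketched as follows (a Python sketch; the names are ours, and objects are assumed pre-sorted by non-increasing p_i/w_i):

```python
def knapsack_bnb(p, w, V):
    """Depth-first branch-and-bound for the 0-1 knapsack. Returns (f, x)."""
    n = len(p)
    best_f, best_x = -1, None

    def bound(k, P, W):                 # LP-relaxation bound over objects k..n-1
        b, cap = P, V - W
        for i in range(k, n):
            if w[i] <= cap:
                cap -= w[i]; b += p[i]
            else:                       # fractional critical item, floored
                return b + (cap * p[i]) // w[i]
        return b                        # everything remaining fits

    def search(k, P, W, y):
        nonlocal best_f, best_x
        if k == n:                      # leaf: record incumbent if better
            if P > best_f:
                best_f, best_x = P, y[:]
            return
        if bound(k, P, W) <= best_f:    # prune: cannot beat incumbent
            return
        if W + w[k] <= V:               # left branch: y_k = 1 (forward move)
            y[k] = 1
            search(k + 1, P + p[k], W + w[k], y)
        y[k] = 0                        # right branch: y_k = 0 (backtrack move)
        search(k + 1, P, W, y)

    search(0, 0, 0, [0] * n)
    return best_f, best_x

# Syslo-Deo-Kowalik example: finds f = 5 first, then the optimum f* = 6
print(knapsack_bnb([2, 3, 6, 3], [1, 2, 5, 4], 5))   # (6, [0, 0, 1, 0])
```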

VUGRAPH 30 Summary

• Various versions of the Knapsack problem
• Approximation algorithms
• Optimal algorithms
 . Dynamic programming
 . Branch-and-bound

VUGRAPH 31