Decomposition and Reformulation in Mixed-Integer Programming

IMA New Directions Short Course on Mathematical Optimization

Jim Luedtke

Department of Industrial and Systems Engineering, University of Wisconsin-Madison

August 11, 2016

Jim Luedtke (UW-Madison) Decomposition Methods Lecture Notes 1 / 47


What makes integer programs hard to solve?

1. Weak LP relaxation bounds
   Pruning in branch-and-bound is rare, leading to huge search trees
   Possible solution: use a better formulation or add (strong) valid inequalities

2. Huge number of variables and/or constraints
   Just solving the LP relaxation is very time-consuming
   Possible solution: a cutting-plane approach, but this does not address a huge number of variables
   Another possibility: Lagrangian relaxation or column generation
     May be able to help with either or both of these challenges
     Particularly useful if it enables decomposition: splitting one large problem into many smaller ones

Outline

1. Lagrangian Relaxation
   Example
   Lagrangian Relaxation Bounds
   Solving the Lagrangian Dual

2. Dantzig-Wolfe Reformulation and Column Generation

3. Branch-and-Price


Motivation

Consider this IP:

z^IP = max Σ_{k=1}^K c_k^T x_k
  s.t.  Dx ≤ d
        A_k x_k ≤ b_k,  k = 1,...,K
        x = (x_1, ..., x_K) ∈ Z_+^{nK}

If only we didn’t have those pesky Dx ≤ d constraints . . .


Motivation

Drop the pesky constraints:

z^R = max Σ_{k=1}^K c_k^T x_k
  s.t.  A_k x_k ≤ b_k,  k = 1,...,K
        x = (x_1, ..., x_K) ∈ Z_+^{nK}

The problem decomposes: z^R = Σ_{k=1}^K z_k, where

  z_k = max{ c_k^T x_k : A_k x_k ≤ b_k, x_k ∈ Z_+^n }  for k = 1,...,K

z^R ≥ z^IP, but dropping Dx ≤ d altogether is pretty severe
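Since the relaxed problem separates by block, z^R can be computed by solving the K subproblems independently. A minimal brute-force sketch on a toy two-block instance (all data below is hypothetical, chosen small enough to enumerate):

```python
import itertools

# Toy data (hypothetical): two blocks, each with two binary variables
# and one "easy" block constraint A_k x_k <= b_k.
c = {1: [3, 2], 2: [4, 1]}
A = {1: ([1, 1], 1), 2: ([1, 1], 1)}   # sum of each block's variables <= 1

def block_points(k):
    """Enumerate X_k = {x in {0,1}^2 : A_k x <= b_k}."""
    row, rhs = A[k]
    return [x for x in itertools.product([0, 1], repeat=2)
            if sum(r * xi for r, xi in zip(row, x)) <= rhs]

# z^R = sum_k z_k: each z_k is solved independently
z_k = {k: max(sum(ci * xi for ci, xi in zip(c[k], x)) for x in block_points(k))
       for k in (1, 2)}
z_R = sum(z_k.values())

# Same value as maximizing over the joint feasible set (no Dx <= d)
z_joint = max(sum(ci * xi for ci, xi in zip(c[1] + c[2], x1 + x2))
              for x1 in block_points(1) for x2 in block_points(2))

assert z_R == z_joint   # decomposition gives the same relaxation value
```

The point of the sketch is only the identity z^R = Σ_k z_k: K small enumerations replace one enumeration over the product space.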


Lagrangian Relaxation

Simplify notation: let A be the matrix combining all A_k submatrices, and b = (b_1, ..., b_K), c = (c_1, ..., c_K):

z^IP = max c^T x
  s.t.  Dx ≤ d
        Ax ≤ b
        x ∈ Z_+^n

Lagrangian relaxation: relax the constraints Dx ≤ d by dualizing them, that is, adding them to the objective with a penalty for violation
The problem with just the constraints Ax ≤ b should be easier to solve
We'll discuss how to choose the constraints to dualize later
For simplicity we assume x ∈ Z_+^n


Lagrangian Relaxation

First, rewrite our IP with X := {x ∈ Z_+^n : Ax ≤ b}:

z^IP = max{ c^T x : Dx ≤ d, x ∈ X }

Let u ∈ R_+^m, and define the following Lagrangian relaxation problem:

  IP(u):  z(u) = max{ c^T x + u^T(d − Dx) : x ∈ X }

Theorem. For any u ∈ R_+^m, z(u) ≥ z^IP.

Why? Let x* be an optimal solution to IP (so z^IP = c^T x* and d − Dx* ≥ 0). Then

  z(u) ≥ c^T x* + u^T(d − Dx*) ≥ z^IP
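The bound z(u) ≥ z^IP can be checked numerically by brute force on a toy instance small enough to enumerate (all data below is hypothetical):

```python
import itertools

# Toy instance (hypothetical): maximize c^T x over binary x, with one "easy"
# constraint x1 + x2 <= 1 defining X and one complicating constraint Dx <= d
# that gets dualized with multiplier u >= 0.
c = [6, 5, 4]
D = [[5, 4, 3]]; d = [8]
A = [[1, 1, 0]]; b = [1]

def satisfies(rows, rhs, x):
    return all(sum(r[j] * x[j] for j in range(len(x))) <= rv
               for r, rv in zip(rows, rhs))

X = [x for x in itertools.product([0, 1], repeat=3) if satisfies(A, b, x)]

# z^IP: best x in X that also satisfies the complicating constraint
z_ip = max(sum(ci * xi for ci, xi in zip(c, x))
           for x in X if satisfies(D, d, x))

def z_of_u(u):
    # z(u) = max over x in X of c^T x + u^T (d - Dx)
    def val(x):
        v = sum(ci * xi for ci, xi in zip(c, x))
        return v + sum(u[i] * (d[i] - sum(D[i][j] * x[j] for j in range(len(x))))
                       for i in range(len(d)))
    return max(val(x) for x in X)

for u in ([0.0], [0.5], [1.0], [2.0]):
    assert z_of_u(u) >= z_ip      # the relaxation bounds z^IP for every u >= 0
```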


Lagrangian Dual

For u ∈ R_+^m,

  z(u) = max{ c^T x + u^T(d − Dx) : x ∈ X }

Definition. The problem

  w^LD = min{ z(u) : u ∈ R_+^m }

is called a Lagrangian dual.

Properties:
  w^LD ≥ z^IP
  w^LD ≤ z(u) for all u ∈ R_+^m

Modification with equality constraints Dx = d: the variables u are free


Lagrangian Dual

IP(u):  z(u) = max{ c^T x + u^T(d − Dx) : x ∈ X }

Theorem. Let u ∈ R_+^m and let x̂ be an optimal solution to IP(u). If
  (1) Dx̂ ≤ d, and
  (2) u^T(d − Dx̂) = 0,
then x̂ is an optimal solution to IP.

Note: the second condition is necessary: x̂ could be feasible to IP, but not optimal.

Why? (1) ⇒ x̂ is feasible, and (2) ⇒ z^IP ≤ z(u) = c^T x̂ + u^T(d − Dx̂) = c^T x̂

Example: Stochastic Integer Programming

Extensive form of SIP:

z^SMIP = min c^T x + Σ_{s=1}^S p_s q_s^T y_s
  s.t.  Ax ≥ b
        T_s x + W_s y_s = h_s,  s = 1,...,S
        x ∈ R_+^{n_1} × Z_+^{p_1}
        y_s ∈ R_+^{n_2} × Z_+^{p_2},  s = 1,...,S

Example: Stochastic Integer Programming

Copy the first-stage variables:

z^SMIP = min Σ_{s=1}^S p_s (c^T x_s + q_s^T y_s)
  s.t.  A x_s ≥ b,  s = 1,...,S
        T_s x_s + W_s y_s = h_s,  s = 1,...,S
        x_s = Σ_{s'=1}^S p_{s'} x_{s'},  s = 1,...,S
        x_s ∈ R_+^{n_1} × Z_+^{p_1},  s = 1,...,S
        y_s ∈ R_+^{n_2} × Z_+^{p_2},  s = 1,...,S


Relax Nonanticipativity

The constraints x_s = Σ_{s'=1}^S p_{s'} x_{s'} are called nonanticipativity constraints. Relax these constraints using Lagrangian relaxation with dual vectors λ = (λ_1, ..., λ_S):

L(λ) := min Σ_{s=1}^S p_s (c^T x_s + q_s^T y_s) + Σ_{s=1}^S p_s λ_s^T ( x_s − Σ_{s'=1}^S p_{s'} x_{s'} )
  s.t.  A x_s ≥ b,  s = 1,...,S
        T_s x_s + W_s y_s = h_s,  s = 1,...,S
        x_s ∈ R_+^{n_1} × Z_+^{p_1},  s = 1,...,S
        y_s ∈ R_+^{n_2} × Z_+^{p_2},  s = 1,...,S

Rewrite the objective (with λ̄ = Σ_{s=1}^S p_s λ_s):

  Σ_{s=1}^S p_s [ (c + λ_s − λ̄)^T x_s + q_s^T y_s ]

Relax Nonanticipativity

Rewritten objective (with λ̄ = Σ_{s=1}^S p_s λ_s):

  Σ_{s=1}^S p_s [ (c + λ_s − λ̄)^T x_s + q_s^T y_s ]

Normalize λ_s so that λ̄ = 0.
The Lagrangian relaxation problem then decomposes: L(λ) = Σ_s p_s D_s(λ_s), where

  D_s(λ_s) := min (c + λ_s)^T x + q_s^T y
    s.t.  Ax ≥ b,  T_s x + W_s y = h_s
          x ∈ R_+^{n_1} × Z_+^{p_1},  y ∈ R_+^{n_2} × Z_+^{p_2}

Each subproblem is a deterministic mixed-integer program

Lagrangian Dual Problem

For any λ = (λ_1, ..., λ_S) with Σ_s p_s λ_s = 0,

  L(λ) ≤ z^SMIP

Lagrangian dual: find the best lower bound:

  w^LD := max{ L(λ) : Σ_{s=1}^S p_s λ_s = 0 }
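A tiny numeric sketch of this bound, on a hypothetical two-scenario toy instance: one binary first-stage variable, and recourse forced by the equality constraint so each scenario subproblem D_s is solvable by enumeration.

```python
# Hypothetical two-scenario toy instance. First stage: x in {0,1} with cost c = 1.
# Recourse: T_s x + W_s y_s = h_s with T_s = W_s = 1, so y_s = h_s - x is forced
# (a nonnegative integer for both x choices here).
p = [0.5, 0.5]       # scenario probabilities
q = [1.0, 2.0]       # recourse costs q_s
h = [1, 2]           # right-hand sides h_s
c = 1.0

def scenario_cost(s, x):
    y = h[s] - x                  # recourse forced by the equality constraint
    return c * x + q[s] * y

# z^SMIP: extensive form with a single shared first-stage decision x
z_smip = min(sum(p[s] * scenario_cost(s, x) for s in range(2)) for x in (0, 1))

def D(s, lam_s):
    # scenario subproblem: first-stage cost perturbed to (c + lambda_s)
    return min((c + lam_s) * x + q[s] * (h[s] - x) for x in (0, 1))

def L(t):
    # lambda = (t, -t) satisfies sum_s p_s lambda_s = 0 for these probabilities
    return p[0] * D(0, t) + p[1] * D(1, -t)

for t in (-2.0, -1.0, 0.0, 1.0, 2.0):
    assert L(t) <= z_smip + 1e-9      # L(lambda) is a lower bound for every lambda
```

On this toy instance the bound is tight at λ = 0; in general one must search over λ to find the best bound w^LD.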


How Good is the Bound from the Lagrangian Dual?

The integer program we're trying to solve:

  z^IP = max{ c^T x : Dx ≤ d, x ∈ X }    (IP)

where X = {x ∈ Z_+^n : Ax ≤ b}

Theorem. w^LD = max{ c^T x : Dx ≤ d, x ∈ conv(X) }

Let P = {x ∈ R_+^n : Ax ≤ b} and z^LP = max{ c^T x : Dx ≤ d, x ∈ P }:
  P ⊇ conv(X), so w^LD ≤ z^LP
  If P = conv(X), then w^LD = z^LP
  w^LD < z^LP is only possible if P ≠ conv(X)
Let's prove it for the special case: X bounded, all integer variables


Strength of Lagrangian Dual of SMIP

Theorem. The Lagrangian dual bound satisfies

  w^LD = min{ c^T x + Σ_{s=1}^S p_s q_s^T y_s : (x, y_s) ∈ conv(X_s), s = 1,...,S }

where, for s = 1,...,S,

  X_s := { (x, y) : Ax ≥ b, T_s x + W_s y = h_s, x ∈ R_+^{n_1} × Z_+^{p_1}, y ∈ R_+^{n_2} × Z_+^{p_2} }

In general w^LD < z^SMIP
But w^LD ≥ z^SLP (the usual LP relaxation)
w^LD is at least as good as any bound obtained using cuts in single-scenario subproblems
In many test instances, w^LD is very close to z^SMIP


How to solve the Lagrangian dual?

Lagrangian dual: w^LD = min{ z(u) : u ≥ 0 }
How to find a u* that solves this optimization problem?

Key insight: z(u) is a piecewise-linear convex function of u. We have already seen

  z(u) = max{ c^T x^t + u^T(d − D x^t) : t = 1,...,T }

where {x^t : t = 1,...,T} are the finitely many points in X = {x ∈ Z_+^n : Ax ≤ b}.
This extends to mixed-integer and unbounded sets as well (as in the Benders analysis)

Option 1: Subgradient

Lagrangian dual: a nonsmooth problem

  w^LD = min{ z(u) : u ≥ 0 }

Assume for this algorithm that X is bounded, so conv(X) has no rays.

Recall: let f : R^m → R and u ∈ R^m. A vector γ(u) is called a subgradient of f at u if

  f(v) ≥ f(u) + γ(u)^T (v − u)  for all v ∈ R^m


Option 1: Subgradient

Subgradient algorithm for the generic convex problem min{ f(u) : u ≥ 0 }:

1. Initialize: u = u^0
2. At each iteration k ≥ 0:
   Calculate f(u^k) and find a subgradient γ(u^k) of f at u^k
   Step: u^{k+1} = max{ u^k − μ_k γ(u^k), 0 }

Notes:
The values μ_k are step sizes that satisfy, at a minimum, μ_k > 0 and μ_k → 0 as k → ∞
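A minimal sketch of the generic method on a hypothetical one-dimensional piecewise-linear convex function, with diminishing step sizes:

```python
# Subgradient method sketch for min{ f(u) : u >= 0 }, applied to a hypothetical
# one-dimensional piecewise-linear convex function f(u) = max(3 - 2u, u - 1).
# Its minimizer is u* = 4/3 with optimal value f(u*) = 1/3.

def f(u):
    return max(3 - 2 * u, u - 1)

def subgrad(u):
    # the slope of an active piece is a valid subgradient
    return -2 if 3 - 2 * u >= u - 1 else 1

u = 5.0                                  # u^0
f_best = f(u)
for k in range(5000):
    mu = 1.0 / (k + 1)                   # step sizes: mu_k > 0, mu_k -> 0
    u = max(u - mu * subgrad(u), 0.0)    # step, projected onto u >= 0
    f_best = min(f_best, f(u))
```

Because z(u) is nonsmooth, tracking the best value seen (`f_best`) matters: the iterates themselves need not decrease monotonically.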

Option 1: Subgradient

Subgradient algorithm for the Lagrangian dual problem min{ z(u) : u ≥ 0 }:

1. Initialize: u = u^0
2. At each iteration k ≥ 0:
   Calculate z(u^k) and find a subgradient γ(u^k) of z at u^k
   Step: u^{k+1} = max{ u^k − μ_k γ(u^k), 0 }

Key question: how to calculate z(u^k) and γ(u^k)?


Option 1: Subgradient

Subgradient algorithm for the Lagrangian dual problem min{ z(u) : u ≥ 0 }.
How to calculate z(u^k) and γ(u^k)?

  z(u^k) = max{ c^T x + (u^k)^T(d − Dx) : x ∈ X }

Subgradients of z: let x^k ∈ X be an optimal solution in the calculation of z(u^k). Then

  γ(u^k) = d − D x^k

is a subgradient of z at u^k.
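Putting the formulas together, a subgradient loop for a toy Lagrangian dual (hypothetical data; z(u^k) and a maximizer x^k are computed by enumerating X):

```python
import itertools

# Toy Lagrangian dual (hypothetical data): X = {x binary : x1 + x2 <= 1}, with
# the single constraint 5x1 + 4x2 + 3x3 <= 8 dualized by u >= 0.
c = [6, 5, 4]
D = [5, 4, 3]; d = 8
X = [x for x in itertools.product([0, 1], repeat=3) if x[0] + x[1] <= 1]

def solve_lagrangian(u):
    """Evaluate z(u) by enumerating X; return z(u) and a maximizer x^k."""
    def val(x):
        return sum(ci * xi for ci, xi in zip(c, x)) \
               + u * (d - sum(Di * xi for Di, xi in zip(D, x)))
    xk = max(X, key=val)
    return val(xk), xk

u, best = 2.0, float("inf")
for k in range(200):
    z_u, xk = solve_lagrangian(u)
    best = min(best, z_u)                                 # best dual bound so far
    gamma = d - sum(Di * xi for Di, xi in zip(D, xk))     # subgradient d - D x^k
    u = max(u - (1.0 / (k + 1)) * gamma, 0.0)             # projected step
```

For this particular instance the loop reaches u = 0 with bound 10; in general the method only approaches w^LD in the limit.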


Jim Luedtke (UW-Madison) Decomposition Methods Lecture Notes 21 / 47 Lagrangian Relaxation Solving the Lagrangian Dual Option 1: Subgradient Subgradients of z Let xk ∈ X be an optimal solution to calculation of z(uk). Then

γ(uk) = d − Dxk is a subgradient of z at uk. m Proof: Let v ∈ R .

z(v) = maxc>x + v>(d − Dx): x ∈ X ≥ c>xk + v>(d − Dxk) = c>xk + (uk)>(d − Dxk) + v>(d − Dxk) − (uk)>(d − Dxk) = z(uk) + (v − uk)>(d − Dxk) = z(uk) + (v − uk)>γ(uk)

Option 1: Subgradient

Advantages: easy to implement; can potentially get good improvement in a few iterations

Disadvantage: convergence can be slow because it ignores history of information (past solution values and subgradients)

Solving the Lagrangian Dual: Option 2

We have already seen:

  w^LD = min{ z(u) : u ≥ 0 }
       = min  η + d^T u
         s.t. η + u^T(D x^t) ≥ c^T x^t,  t = 1,...,T
              u ≥ 0

This last problem is a linear program with a huge number of constraints.
Solve it by a cutting-plane algorithm, just like Benders decomposition!
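A sketch of the cutting-plane approach for a single dualized constraint (m = 1), on hypothetical toy data. The oracle that evaluates z(u) also supplies the cut; since the master is one-dimensional here, it can be minimized by checking u = 0 and the crossing points of the collected cuts instead of calling an LP solver:

```python
import itertools

# Cutting-plane sketch for w^LD = min{ z(u) : u >= 0 } with one dualized
# constraint. Cuts are the affine pieces z(v) >= a_t + g_t v found by the oracle.
c = [6, 5, 4]
D = [5, 4, 3]; d = 8
X = [x for x in itertools.product([0, 1], repeat=3) if x[0] + x[1] <= 1]

def oracle(u):
    """Evaluate z(u) by enumeration; return z(u) and the new cut (a_t, g_t)."""
    def val(x):
        return sum(ci * xi for ci, xi in zip(c, x)) \
               + u * (d - sum(Di * xi for Di, xi in zip(D, x)))
    xt = max(X, key=val)
    a = sum(ci * xi for ci, xi in zip(c, xt))       # c^T x^t
    g = d - sum(Di * xi for Di, xi in zip(D, xt))   # d - D x^t
    return val(xt), (a, g)

def master(cuts):
    """min over u >= 0 of max_t (a_t + g_t u): check u = 0 and cut crossings."""
    cand = [0.0] + [(a2 - a1) / (g1 - g2)
                    for (a1, g1), (a2, g2) in itertools.combinations(cuts, 2)
                    if g1 != g2 and (a2 - a1) / (g1 - g2) > 0]
    u = min(cand, key=lambda v: max(a + g * v for a, g in cuts))
    return u, max(a + g * u for a, g in cuts)

u, lb, cuts = 2.0, -float("inf"), []
for _ in range(50):
    z_u, cut = oracle(u)
    if z_u <= lb + 1e-9:     # oracle value meets the master bound: optimal
        break
    cuts.append(cut)
    u, lb = master(cuts)
```

With m > 1 the master really is an LP, re-solved as cuts accumulate, exactly as in Benders decomposition.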


Solving the Lagrangian Dual: Other Options

Lagrangian dual: min{ z(u) : u ≥ 0 }

Bundle methods: nonlinear programming methods for nonsmooth optimization
  Use all (or much of) the past subgradients, as in the cutting-plane algorithm
  Use a stabilization technique to limit "bouncing around":
    Trust region: restrict ||u − u^k|| ≤ α_k in the master LP
    Augmented objective: add λ_k ||u − u^k|| to the master LP objective
    Bundle-level: solve the unrestricted LP to obtain a level L, then solve a second problem finding the closest u to u^k that achieves a minimum improvement in the objective


Choosing the Lagrangian Dual

Trade-offs in choosing the constraints to dualize:

  z^IP = max{ c^T x : Dx ≤ d, Ax ≤ b, x ∈ Z_+^n }

Dualize the "hard" constraints, leaving something easy (convex hull described by the remaining inequalities):
  Bound is only as strong as the LP bound
  Subproblems (for fixed u) will be easy to solve
  Might be more efficient than LP

Leave some "hard" constraints undualized (convex hull not described by the inequalities):
  Bound can be significantly better than the LP bound
  Solving the subproblems can be more difficult
  If this enables decomposition, the "hard" subproblems may not be too bad in practice, e.g. knapsack


What to do After Solving the Lagrangian Dual?

1. Use it as a basis for heuristics
   Problem-specific
   E.g., fix some variables based on the Lagrangian subproblem and solve a smaller problem
   Often, it is relatively easy to restore feasibility
   E.g., in stochastic IP, fix the first-stage variables and solve the second-stage problems

2. Use it as a relaxation within a branch-and-bound search
   Instead of solving LP relaxations!
   Branching is problem-specific (we'll see this later)

Outline

1. Lagrangian Relaxation

2. Dantzig-Wolfe Reformulation and Column Generation
   Dantzig-Wolfe Reformulation
   Column Generation

3. Branch-and-Price

Motivation

Consider this IP:

z^IP = max Σ_{k=1}^K c_k^T x_k
  s.t.  Dx ≤ d
        x_k ∈ X_k,  k = 1,...,K

where X_k = {x ∈ Z_+^n : A_k x ≤ b_k}

We could relax the constraints Dx ≤ d, as in Lagrangian relaxation.
A different option: the Dantzig-Wolfe reformulation

Dantzig-Wolfe Reformulation

Assume X_k is bounded, so it has finitely many points: X_k = {x^{k,t}}_{t=1}^{T_k}

Then x_k ∈ X_k if and only if there exist λ_{k,t} ∈ {0,1}, t = 1,...,T_k, such that:

  x_k = Σ_{t=1}^{T_k} λ_{k,t} x^{k,t},   Σ_{t=1}^{T_k} λ_{k,t} = 1


Dantzig-Wolfe Reformulation (2)

Replacing the constraints x_k ∈ X_k with the λ formulation yields:

z^IP = max Σ_{k=1}^K c_k^T x_k
  s.t.  Σ_{k=1}^K D_k x_k ≤ d
        x_k = Σ_{t=1}^{T_k} λ_{k,t} x^{k,t},   Σ_{t=1}^{T_k} λ_{k,t} = 1,  k = 1,...,K
        λ_{k,t} ∈ {0,1},  t = 1,...,T_k,  k = 1,...,K

Now, substitute out the x_k variables...

Dantzig-Wolfe Reformulation (3)

Obtain the Dantzig-Wolfe reformulation:

  max  Σ_{k=1}^K Σ_{t=1}^{T_k} (c_k^T x^{k,t}) λ_{k,t}
  s.t. Σ_{k=1}^K Σ_{t=1}^{T_k} (D_k x^{k,t}) λ_{k,t} ≤ d
       Σ_{t=1}^{T_k} λ_{k,t} = 1,  k = 1,...,K
       λ_{k,t} ∈ {0,1},  t = 1,...,T_k,  k = 1,...,K
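On a toy instance (hypothetical data), one can verify by brute force that the reformulation has the same optimal value as the original IP: the columns are the enumerated points of each X_k, and the binary λ with the convexity rows amounts to picking one column per block:

```python
import itertools

# Brute-force equivalence check for the Dantzig-Wolfe reformulation on a
# hypothetical toy instance: K = 2 blocks, two binary variables per block,
# and one linking constraint.
c = {1: [3, 2], 2: [4, 1]}         # block objectives c_k
Dk = {1: [1, 1], 2: [1, 1]}        # linking-row coefficients D_k
d = 1                              # right-hand side of the linking constraint

# X_k = {x in {0,1}^2 : x1 + x2 <= 1}; its points are the DW columns x^{k,t}
points = {k: [x for x in itertools.product([0, 1], repeat=2) if sum(x) <= 1]
          for k in (1, 2)}

def obj(k, x):
    return sum(ci * xi for ci, xi in zip(c[k], x))

def use(k, x):
    return sum(Di * xi for Di, xi in zip(Dk[k], x))

# Original IP: enumerate (x_1, x_2) directly
z_ip = max(obj(1, x1) + obj(2, x2)
           for x1 in points[1] for x2 in points[2]
           if use(1, x1) + use(2, x2) <= d)

# DW master: each column carries its cost c_k^T x^{k,t} and usage D_k x^{k,t};
# binary lambda plus the convexity rows = choose exactly one column per block
cols = {k: [(obj(k, x), use(k, x)) for x in points[k]] for k in (1, 2)}
z_dw = max(c1 + c2
           for (c1, u1) in cols[1] for (c2, u2) in cols[2]
           if u1 + u2 <= d)

assert z_ip == z_dw   # the reformulation is equivalent
```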

Jim Luedtke (UW-Madison) Decomposition Methods Lecture Notes 31 / 47 2 What is the strength of the LP relaxation, compared to the original formulation? 3 If we can solve the LP relaxation, then how should we use it in branch-and-bound? We’ve already seen the answer to the first two questions!

Dantzig-Wolfe Reformulation and Column Generation Dantzig-Wolfe Reformulation Questions about Dantzig-Wolfe formulation

1 How can we solve the relaxation? (Since it may have exponentially many variables)

Jim Luedtke (UW-Madison) Decomposition Methods Lecture Notes 32 / 47 3 If we can solve the LP relaxation, then how should we use it in branch-and-bound? We’ve already seen the answer to the first two questions!

Dantzig-Wolfe Reformulation and Column Generation Dantzig-Wolfe Reformulation Questions about Dantzig-Wolfe formulation

1 How can we solve the linear programming relaxation? (Since it may have exponentially many variables) 2 What is the strength of the LP relaxation, compared to the original formulation?

Jim Luedtke (UW-Madison) Decomposition Methods Lecture Notes 32 / 47 We’ve already seen the answer to the first two questions!

Dantzig-Wolfe Reformulation and Column Generation Dantzig-Wolfe Reformulation Questions about Dantzig-Wolfe formulation

1 How can we solve the linear programming relaxation? (Since it may have exponentially many variables) 2 What is the strength of the LP relaxation, compared to the original formulation? 3 If we can solve the LP relaxation, then how should we use it in branch-and-bound?

Questions About the Dantzig-Wolfe Formulation

1. How can we solve the linear programming relaxation? (Since it may have exponentially many variables)
2. What is the strength of the LP relaxation, compared to the original formulation?
3. If we can solve the LP relaxation, then how should we use it in branch-and-bound?

We've already seen the answer to the first two questions!


Let's revisit our IP

K IP X > z = max ck xk k=1 K X Dkxk ≤ d, k=1

xk ∈ Xk, k = 1,...,K

m For u ∈ R+ , Lagrangian subproblem is:

K K nX > > X  o z(u) = max ck xk + u d − Dkxk : xk ∈ Xk, k = 1,...,K k=1 k=1 K > X n > > o = u d + max (ck − u Dk)x : x ∈ Xk k=1

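The last equality shows that $z(u)$ decomposes into $K$ independent subproblems. As a minimal sketch, here is that evaluation on a hypothetical tiny instance where each $X_k$ is small enough to list explicitly (in practice each subproblem is solved by a specialized algorithm):

```python
# Sketch of evaluating the Lagrangian subproblem z(u) by decomposition,
# on a hypothetical tiny instance with enumerated sets X_k.
# Original problem: max sum_k c_k . x_k  s.t.  sum_k D_k x_k <= d,  x_k in X_k.

def z_of_u(u, d, blocks):
    """blocks: list of (c_k, D_k, X_k), with X_k a list of feasible points.
    Returns z(u) = u.d + sum_k max over x in X_k of (c_k - D_k^T u) . x."""
    total = sum(ui * di for ui, di in zip(u, d))
    for c, D, X in blocks:
        # Reduced objective for block k: c_k - D_k^T u
        red = [c[j] - sum(u[i] * D[i][j] for i in range(len(u)))
               for j in range(len(c))]
        # Each block is an independent (here: trivially enumerable) subproblem
        total += max(sum(rj * xj for rj, xj in zip(red, x)) for x in X)
    return total

# Two blocks, one linking constraint (m = 1); all data hypothetical
d = [3]
blocks = [
    ([2.0, 3.0], [[1, 2]], [(0, 0), (1, 0), (0, 1), (1, 1)]),
    ([4.0], [[2]], [(0,), (1,)]),
]
print(z_of_u([0.0], d, blocks))  # u = 0 ignores the linking constraint: 9.0
print(z_of_u([1.0], d, blocks))  # penalizing the linking constraint: 7.0
```

Note that larger $u$ lowers the bound here, which is exactly what the Lagrangian dual $\min_{u \ge 0} z(u)$ exploits.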
Lagrangian Relaxation, cont'd

\[
w^{LD} = \min_{u \ge 0} z(u)
= \min_{u \ge 0} \ u^\top d + \sum_{k=1}^{K} \max \big\{ (c_k^\top - u^\top D_k)\, x : x \in X_k \big\}
\]
\[
= \min \ u^\top d + \sum_{k=1}^{K} \eta_k
\]
\[
\text{s.t.} \quad \eta_k + u^\top D_k x^{k,t} \ge c_k^\top x^{k,t}, \quad t = 1,\dots,T_k, \ k = 1,\dots,K, \qquad u \ge 0
\]

where $\{x^{k,t} : t = 1,\dots,T_k\}$ are the extreme points of $\mathrm{conv}(X_k)$ for all $k$.

Lagrangian Relaxation: Take the Dual

\[
w^{LD} = \min \ u^\top d + \sum_{k=1}^{K} \eta_k
\quad \text{s.t.} \quad \eta_k + u^\top D_k x^{k,t} \ge c_k^\top x^{k,t}, \ t = 1,\dots,T_k, \ k = 1,\dots,K
\]

Dual of the LP Formulation

\[
w^{LD} = \max \sum_{k=1}^{K} \sum_{t=1}^{T_k} \big( c_k^\top x^{k,t} \big) \lambda_{k,t}
\]
\[
\text{s.t.} \quad \sum_{k=1}^{K} \sum_{t=1}^{T_k} \big( D_k x^{k,t} \big) \lambda_{k,t} \le d,
\]
\[
\sum_{t=1}^{T_k} \lambda_{k,t} = 1, \quad k = 1,\dots,K,
\]
\[
\lambda_{k,t} \ge 0, \quad t = 1,\dots,T_k, \ k = 1,\dots,K
\]

Does this look familiar?

Questions About Dantzig-Wolfe Formulation

1 How can we solve the linear programming relaxation?
Column generation: adding a variable corresponds to adding a constraint in the Lagrangian dual problem
The "pricing problem" will be identical to the Lagrangian relaxation subproblem

2 What is the strength of the LP relaxation, compared to the original formulation?
Exactly equal to the Lagrangian relaxation bound
Always at least as good as the original LP relaxation
Exactly equal to the original LP relaxation if $\mathrm{conv}(X_k) = \{x \in \mathbb{R}^{n_k} : A_k x \le b_k\}$ for all $k$

Column Generation: Formulating With a Huge Number of Variables

The DW reformulation yields a formulation with a huge number of variables as a reformulation of a given problem
In many applications, it is conceptually easier to start directly with a formulation having a huge number of variables
Can also be useful in avoiding symmetry in alternative formulations

Example: Cutting Stock Problem

A steel company makes a set of products $I$
Product $i \in I$ has width $w_i > 0$ and demand $b_i > 0$
Products are made by cutting them from a roll of length $L$; multiple products can be cut from a roll
E.g., $L = 10$, $w_1 = 4$, $w_2 = 3$: one roll can make one of product 1 and two of product 2 ($4 + 2 \cdot 3 = 10$)
Let's formulate an IP to minimize the number of rolls used

New set: $P$, the set of all possible "cutting patterns"
$a_{ip}$ = number of product $i$ made by pattern $p \in P$
Let $x_p$ = number of copies of pattern $p$ to cut:

\[
\min \sum_{p \in P} x_p
\quad \text{s.t.} \quad \sum_{p \in P} a_{ip} x_p \ge b_i, \ i \in I, \qquad x_p \in \mathbb{Z}_+, \ p \in P
\]
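To make the pattern set $P$ concrete, here is a small sketch that enumerates the maximal cutting patterns for the slide's example ($L = 10$, widths 4 and 3). The brute-force enumeration is hypothetical and only sensible for tiny instances; only maximal patterns are kept since a dominated pattern never helps a covering formulation:

```python
# Enumerate all maximal cutting patterns a = (a_1, ..., a_|I|) with
# sum_i widths[i] * a[i] <= L, for the toy instance from the slides.
from itertools import product

def patterns(L, widths):
    """Nonzero feasible patterns, filtered to the maximal ones
    (no further product can be added without exceeding L)."""
    caps = [L // w for w in widths]  # per-product upper bound on a_i
    feasible = [a for a in product(*(range(c + 1) for c in caps))
                if sum(w * ai for w, ai in zip(widths, a)) <= L and any(a)]

    def maximal(a):
        used = sum(w * ai for w, ai in zip(widths, a))
        return all(used + w > L for w in widths)

    return [a for a in feasible if maximal(a)]

print(patterns(10, [4, 3]))  # [(0, 3), (1, 2), (2, 0)]
```

The pattern `(1, 2)` is exactly the slide's example: one of product 1 and two of product 2.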
Column Generation for Cutting Stock

Let $P' \subseteq P$ be a given subset of cutting patterns (e.g., include all patterns producing just one product, to ensure feasibility).

Restricted Master LP:
\[
\min \sum_{p \in P'} x_p \quad \text{s.t.} \quad \sum_{p \in P'} a_{ip} x_p \ge b_i, \ i \in I, \qquad x_p \ge 0, \ p \in P'
\]

Its dual:
\[
\max \sum_{i \in I} b_i \pi_i \quad \text{s.t.} \quad \sum_{i \in I} a_{ip} \pi_i \le 1, \ p \in P', \qquad \pi_i \ge 0, \ i \in I
\]

Let $(\hat{x}, \hat{\pi})$ be an optimal primal/dual solution of the restricted master. Then $\hat{x}$ is optimal for the full master LP $\Leftrightarrow$ $\hat{\pi}$ is feasible for the full dual, i.e., $\sum_{i \in I} a_{ip} \hat{\pi}_i \le 1$ for all $p \in P$.

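The pricing step implied by the dual is: given $\hat{\pi}$, find a pattern maximizing $\sum_i a_{ip} \hat{\pi}_i$ over feasible patterns, i.e., an integer (unbounded) knapsack; if the optimum exceeds 1, that pattern has negative reduced cost and enters the restricted master. A sketch with hypothetical dual prices, scaled to integers (by 100) so the DP comparisons are exact; a pattern then prices out when the value exceeds 100:

```python
# Cutting-stock pricing: max sum_i pi[i] * a_i  s.t.  sum_i w[i] * a_i <= L,
# a_i nonnegative integers -- an unbounded knapsack solved by dynamic programming.
# The dual prices pi are hypothetical, scaled to integers for exact arithmetic.

def price(pi, w, L):
    best = [0] * (L + 1)       # best[l] = max dual value with roll capacity l
    last = [None] * (L + 1)    # last product added in an optimal fill of l
    for l in range(1, L + 1):
        best[l], last[l] = best[l - 1], last[l - 1]  # leave capacity unused
        for i, wi in enumerate(w):
            if wi <= l and best[l - wi] + pi[i] > best[l]:
                best[l], last[l] = best[l - wi] + pi[i], i
    a, l = [0] * len(w), L     # recover the pattern by walking back
    while l > 0 and last[l] is not None:
        i = last[l]
        a[i] += 1
        l -= w[i]
    return best[L], a

value, pattern = price([45, 35], [4, 3], 10)  # duals 0.45 and 0.35, scaled by 100
print(value, pattern)  # 115 [1, 2]: value 1.15 > 1, so the pattern enters
```

Repeating this loop (re-solve restricted master, re-price) until no pattern exceeds 1 solves the full master LP without ever writing down all of $P$.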
Outline

1 Lagrangian Relaxation

2 Dantzig-Wolfe Reformulation and Column Generation

3 Branch-and-Price

Branch-and-Price

What to do after solving the LP relaxation by column generation? Possibilities:
Heuristics: e.g., start a MIP solve with this subset of columns. If you find a feasible solution close to the column generation LP bound, you may be happy.
If you need an optimal solution (or better than you have found)... Branch-and-price: do column generation at every node!

Branch-and-Price

Difficulty in branching on the $\lambda$ variables:
Fixing $\lambda_{k,t} = 1$ chooses $x^{k,t}$ completely – very restrictive
Fixing $\lambda_{k,t} = 0$ excludes just the one solution $x^{k,t}$!

Fixing $\lambda_{k,t} = 0$ also destroys subproblem structure, since we must exclude the solution $x^{k,t}$:
\[
\max \big\{ (c_k^\top - u^\top D_k)\, x : x \in X_k, \ x \ne x^{k,t} \big\}
\]

Jim Luedtke (UW-Madison) Decomposition Methods Lecture Notes 44 / 47 k,t Avoid generating new columns with xi 6= 0: In subproblem k, fix x = 0 i  > > max (ck − u Dk) x : x ∈ Xk, xi = 0 Often preserves subproblem structure

Similarly for alternative branch xki = 1

Branch-and-Price Branch-and-Price: Alternative Branching

Branch on the “original” problem variables xk

E.g. fix xki = 0 k,t k,t In LP Master Problem: Delete columns x with xi 6= 0 Fix λk,t = 0 for such columns if already generated

Jim Luedtke (UW-Madison) Decomposition Methods Lecture Notes 45 / 47 Similarly for alternative branch xki = 1

Branch-and-Price Branch-and-Price: Alternative Branching

Branch on the “original” problem variables xk

E.g. fix xki = 0 k,t k,t In LP Master Problem: Delete columns x with xi 6= 0 Fix λk,t = 0 for such columns if already generated k,t Avoid generating new columns with xi 6= 0: In subproblem k, fix x = 0 i  > > max (ck − u Dk) x : x ∈ Xk, xi = 0 Often preserves subproblem structure

Jim Luedtke (UW-Madison) Decomposition Methods Lecture Notes 45 / 47 Branch-and-Price Branch-and-Price: Alternative Branching

Branch on the “original” problem variables xk

E.g. fix xki = 0 k,t k,t In LP Master Problem: Delete columns x with xi 6= 0 Fix λk,t = 0 for such columns if already generated k,t Avoid generating new columns with xi 6= 0: In subproblem k, fix x = 0 i  > > max (ck − u Dk) x : x ∈ Xk, xi = 0 Often preserves subproblem structure

Similarly for alternative branch xki = 1

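With an explicitly listed $X_k$ (as in a toy instance), enforcing the branch $x_{ki} = 0$ in the pricing subproblem is just a filter on $X_k$; for structured subproblems one would instead fix the variable inside the specialized solver. A minimal sketch with hypothetical data:

```python
# Sketch: enforcing the branch x_{ki} = 0 inside an enumerated pricing subproblem.
# X is a hypothetical explicit list of the feasible points of one block X_k.

def solve_subproblem(red_cost, X, fixed_zero=()):
    """max red_cost . x over x in X, with x_i = 0 for every i in fixed_zero."""
    feasible = [x for x in X if all(x[i] == 0 for i in fixed_zero)]
    return max(feasible,
               key=lambda x: sum(r * xi for r, xi in zip(red_cost, x)))

X = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(solve_subproblem([2.0, 3.0], X))                   # root node: (1, 1)
print(solve_subproblem([2.0, 3.0], X, fixed_zero=(0,)))  # branch x_0 = 0: (0, 1)
```

The same subproblem solver serves every node; only the set of fixed variables changes down the tree.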
Branching in Cutting Stock

Branching is more difficult in the cutting stock formulation:
Multiple patterns are being selected
We cannot simply say a pattern must make $\le k$ or $\ge k + 1$ of a product, since that would enforce it for all patterns
Must branch by adding constraints (not just bounds on variables): LP relaxations get larger, and pricing problems become more complicated

We'll skip these details.

How to Implement Branch-and-Price?

Solving the LP master problem is not too hard to implement
Good news: in many applications it often yields a provably near-optimal solution!
Commercial solvers don't allow you to add variables in the branch-and-bound tree
SCIP (scip.zib.de – free for academics) supports branch-and-price
Generic Column Generation (www.or.rwth-aachen.de/gcg/) – based on SCIP
Some open-source frameworks are available at www.coin-or.org:
BCP: Branch-cut-price
CHiPPS and DIP – also supports Lagrangian relaxation
ABACUS