Stochastic Programming Modeling

IMA New Directions Short Course on Mathematical Optimization

Jeff Linderoth

Department of Industrial and Systems Engineering University of Wisconsin-Madison

August 8, 2016

Jeff Linderoth (UW-Madison) Stochastic Programming Modeling Lecture Notes 1 / 77

Week #2

The first week focused on theory and algorithms for continuous optimization problems where problem parameters are known with certainty. This week we will focus on two different topics:
1. Stochastic Programming: used for optimization under data uncertainty
2. Integer Programming: used for modeling discrete decisions


Today’s Outline

- About This Week
- About Us
- About You


Stochastic Programming
- What is it? / Why should we do it?
- A Newsvendor
- Recourse Models and Extensive Form
- How to implement in a modeling language

This Week: Resources

Our exercises will be done with AMPL: A Mathematical Programming Language. We added you all to a Dropbox; there you can get AMPL, templates for the exercises, and the lecture slides.

This Week: Optimization “Dream Team”

Monday: Dave Morton, Northwestern, Sample Average Approximation
Tuesday: Shabbir Ahmed, Georgia Tech, Multistage Stochastic Programming
Wednesday: Robert Hildebrand, IBM, Lenstra’s Algorithm
Thursday: Santanu Dey, Georgia Tech, Cutting Plane Theory
Friday: Dan Bienstock, Columbia, Mixed Integer Nonlinear Programming

This Week: Social Events!

Monday: Stub and Herb’s

Wednesday: Twins Game

Thursday: Surly Brewing Company


This Week: Recommended Texts

Stochastic Programming
- ?: Very good. Requires a strong math background.
- ?: A more gentle introduction, but still covers the whole field quite well.
- ?: FREE! It’s in the Dropbox.


Integer Programming
- ?: Classic reference.
- ?: A more gentle treatment.
- ?: Very nice geometric intuition.
- ?: My (new) favorite book.

This Week: Course Level/Expectations

- We will use AMPL (www.ampl.com) to solve problems and prototype algorithms; if nothing else, you can learn a new language for modeling and solving mathematical optimization problems.
- We will do a few proofs, but we will not require significant mathematical sophistication beyond a reasonable understanding of LP duality.
- We assume some basic background in probability theory (no measure theory required): what a random variable is, expected value, the law of large numbers, some basic statistics (CLT).
- We will expect some basic linear algebra knowledge.

This Week: About Us...

B.S. (G.E.), UIUC, 1992. M.S., OR, GA Tech, 1994. Ph.D., GA Tech, 1998.
1998-2000: MCS, ANL. 2000-2002: Axioma, Inc. 2002-2007: Lehigh University.
Research Areas: Large-Scale Optimization, High-Performance Computing.
Married. One child, Jacob, now 13. He is awesome.
Hobbies: Golf, Integer Programming, Human Pyramids.

This Week: About Jim...

B.S. (I.E.), UW-Madison, 2001. M.S., OR, GA Tech, 2004. Ph.D., GA Tech, 2007.
2007-2008: IBM. 2008-2016: UW-Madison.
Research Areas: Discrete Optimization, Stochastic Optimization, Applications.
Married. Three children: Rowan, Camerson, Remy. They are awesome.
Hobbies: Boxing, Integer Programming, Human Pyramids.

This Week: Picture Time

This Week: About You – Quiz #1!

1. Name
2. Nationality
3. Education background
4. Research interests / thesis topic?
5. (Optimization) modeling languages you know (AMPL, GAMS, Mosel, CVX, ...)
6. Programming languages you know (C, Python, Matlab, Julia, FORTRAN, Java, ...)
7. Anything specific you hope to accomplish/learn this week?
8. One interesting fact about yourself you think we should know.
9. Do you like human pyramids? :-)


Introduction to SP: Stochastic Programming

$64 Question: What does “Programming” mean in “Mathematical Programming”, “Linear Programming”, etc.?

A: Planning. Mathematical Programming (Optimization) is about decision making, or planning. Stochastic Programming is about decision making under uncertainty. View it as “Mathematical Programming with random parameters.”

Introduction to SP: Dealing With Randomness

In most applications of optimization, randomness is ignored. Otherwise, it is dealt with via:
- Sensitivity analysis. (For large-scale problems, sensitivity analysis is useless.)
- “Careful” determination of instance parameters. (No matter how careful you are, you can’t get rid of inherent randomness.)
Stochastic Programming is the way!¹

¹ This is not necessarily true, but we will assume it to be so for the next two days.

Introduction to SP: Hot Off the Presses

A paperboy (newsvendor) needs to decide how many papers to buy in order to maximize his profit. He doesn’t know at the beginning of the day how many papers he can sell (his demand).
- Each newspaper costs c.
- He can sell each newspaper for a price of s.
- He can return each unsold newspaper at the end of the day for r. (Note that s > c > r.)
- The demand (unknown when we purchase papers) is D.


Newsvendor Profit

$$F(x, D) = \begin{cases} (s - c)x & \text{if } x \le D \\ sD + r(x - D) - cx & \text{if } x > D \end{cases}$$
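The piecewise profit can be coded directly; the default prices used here (s = 2, c = 0.3, r = 0.05) are the ones that appear later on the simulation slide.

```python
def newsvendor_profit(x, d, s=2.0, c=0.3, r=0.05):
    """Realized profit F(x, D): buy x papers at unit cost c, sell up to the
    demand d at price s, and salvage any leftovers at r (with s > c > r)."""
    if x <= d:
        return (s - c) * x                  # every paper bought is sold
    return s * d + r * (x - d) - c * x      # sell d, salvage x - d
```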

Introduction to SP: Pictures of the Function

Marginal profit: (s − c), if we can sell all (x ≤ D). Marginal loss: (c − r), if we have to salvage.

[Figure: F(x, D) as a function of x, rising with slope s − c up to a kink at x = D and falling with slope c − r beyond it.]


Introduction to SP: What Should We Do?

Optimize, silly: $\max_{x \ge 0} F(x, D)$.


This problem does not make sense! You can’t optimize something random! (http://en.wikipedia.org/wiki/Chewbacca_defense)

Introduction to SP: The Function is “Random”

[Figure: the profit curves F(x, D1) and F(x, D2) versus x, with kinks at x = D1 and x = D2.]

One x can’t simultaneously optimize both functions

Introduction to SP: (Silly) Idea #1

Suppose D is a random variable with cdf $H(t) \stackrel{\text{def}}{=} P(D \le t)$.

“Silly” Idea: Plan for the Average Case

Let $\mu \stackrel{\text{def}}{=} E[D]$ be the mean value of demand. In this case (proof by picture):

$$\max_{x \ge 0} F(x, \mu) \;\Rightarrow\; x^* = \mu.$$

In this case, the optimal policy is to purchase µ newspapers. We will see that this can be far from optimal when your problem takes more uncertainty into account.


Introduction to SP: Idea #2 – Robust Plan for Worst Case

Suppose D ∈ [ℓ, u], and we wish to do the best we can given that the worst outcome for our objective will occur:

$$\max_{x \ge 0} \min_{D \in [\ell, u]} F(x, D)$$


Note that we can write $F(x, D) = \min\{(s - c)x,\; D(s - r) + (r - c)x\}$, so

$$\begin{aligned}
\max_{x \ge 0} \min_{D \in [\ell, u]} F(x, D)
&= \max_{x \ge 0} \min_{D \in [\ell, u]} \min\{(s - c)x,\; D(s - r) + (r - c)x\} \\
&= \max_{x \ge 0} \min\{(s - c)x,\; \ell(s - r) + (r - c)x\} \\
&= \max_{x \ge 0} F(x, \ell) \quad \Rightarrow \quad x^* = \ell
\end{aligned}$$

This is robust optimization; we will not cover it in detail this week.
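A quick numerical check of the worst-case derivation: the adversary picks the worst demand in an interval for each candidate order x, and the best robust order is the lower endpoint. The demand interval [80, 120] and the prices are illustrative choices, not slide data.

```python
# Brute-force the robust newsvendor: maximize the worst-case profit over a
# grid of candidate orders; the optimizer lands on x* = lo, as derived.
s, c, r = 2.0, 0.3, 0.05

def profit(x, d):
    return (s - c) * x if x <= d else s * d + r * (x - d) - c * x

def worst_case(x, lo, hi, steps=200):
    return min(profit(x, lo + (hi - lo) * k / steps) for k in range(steps + 1))

lo, hi = 80.0, 120.0
candidates = [lo + (hi - lo) * k / 100 for k in range(101)]
best_x = max(candidates, key=lambda x: worst_case(x, lo, hi))
print(best_x)  # 80.0, i.e. x* = lo
```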

Introduction to SP: Idea #3: Maximize Long-Run Profit

The “best” idea: treat F(x, D) as a proper random variable, and maximize long-run profit, i.e., solve the optimization problem:

$$\max_{x \ge 0} E[F(x, D)].$$


In this case, the objective may make sense. The newsvendor will make a purchase every day


Introduction to SP: Optimizing for the Newsvendor

Given only knowledge of the random variable D, given as the cdf $H_D(t)$, how many newspapers should the newsvendor buy?


With some old-school calculus (the chain rule and the Fundamental Theorem of Calculus), one can show that the optimal closed-form solution to the newsvendor problem is

$$x^* = H^{-1}\!\left(\frac{s - c}{s - r}\right),$$

the (s − c)/(s − r) quantile of the distribution H.
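For a concrete instance of the quantile formula, Python's `statistics.NormalDist` exposes the inverse cdf; the numbers are those of the simulation slide (s = 2, c = 0.3, r = 0.05, D ~ N(100, 20)).

```python
from statistics import NormalDist

# Closed-form newsvendor solution x* = H^{-1}((s - c)/(s - r)) for
# normally distributed demand.
s, c, r = 2.0, 0.3, 0.05
demand = NormalDist(mu=100, sigma=20)

critical_ratio = (s - c) / (s - r)       # ≈ 0.872
x_star = demand.inv_cdf(critical_ratio)  # the critical quantile of H
print(round(x_star))  # 123
```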


It Ain’t Always “That Easy”: the newsvendor is about the only stochastic program that admits such a simple “closed-form” solution. In general, we must solve instances numerically (and also approximately).

Introduction to SP: Simulating (with Scenarios)

newsboy.xls

s = 2, c = 0.3, r = 0.05. Demand: normally distributed, µ = 100, σ = 20.

Mean-value solution: buy 100. (Duh!) TRUE long-run profit ≈ 154.
Stochastic solution: buy 123. TRUE long-run profit ≈ 162.

The difference between the two solutions (162 − 154) is called the value of the stochastic solution.
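The long-run profits on this slide can be reproduced with a small Monte Carlo sketch (no spreadsheet needed):

```python
import random
from statistics import mean

# Monte Carlo version of the newsboy.xls experiment: estimate the long-run
# profit of the mean-value order (100) and the stochastic order (123).
s, c, r = 2.0, 0.3, 0.05

def profit(x, d):
    sold = min(x, max(d, 0.0))   # can't sell more than demand (or less than 0)
    return s * sold + r * (x - sold) - c * x

random.seed(1)
demands = [random.gauss(100, 20) for _ in range(200_000)]
long_run = {x: mean(profit(x, d) for d in demands) for x in (100, 123)}
print({x: round(v, 1) for x, v in long_run.items()})
# Estimates land near 154 (mean-value order) and 162 (stochastic order).
```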

Introduction to SP: Do You Feel Lucky, Punk?

Should we always optimize the random variable F (x, D) in expectation? We may be “risk-averse”

$$\min_{x \ge 0} \rho[F(x, D)]$$

- If ρ(a) = E[a]: a standard stochastic program.
- If ρ(a) = E[a] + λV(a) for λ ∈ ℝ: a “mean-variance” stochastic program.
Risk measures are discussed in the second lecture.
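A minimal sketch of evaluating such a risk measure over a finite list of scenarios; the outcome values and probabilities below are made up for illustration.

```python
# Mean-variance risk measure rho(a) = E[a] + lambda * V(a) over scenarios.
def rho(outcomes, probs, lam):
    m = sum(p * a for p, a in zip(probs, outcomes))              # E[a]
    v = sum(p * (a - m) ** 2 for p, a in zip(probs, outcomes))   # V(a)
    return m + lam * v

print(rho([10.0, 0.0], [0.5, 0.5], 0.0))  # 5.0 (plain expectation)
print(rho([10.0, 0.0], [0.5, 0.5], 1.0))  # 30.0 (variance 25 penalized)
```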


Introduction to SP: Another Possible Newsvendor Problem

Suppose the newsvendor is lazy. He just wants to usually make enough money to go to Stub and Herb’s, but he doesn’t want to hurt his back carrying too many papers.


Chance Constraints

$$\min_{x \ge 0} \{\, x \mid P\{F(x, D) \ge b\} \ge 1 - \alpha \,\}$$

Minimize the number of papers to purchase to ensure that the probability that you make at least b in profit is at least 1 − α. Note that F(x, D) is a random variable.

Jim will discuss this a bit as well
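A scenario-based sketch of the chance constraint: scan upward for the smallest order x whose empirical probability of reaching profit b is at least 1 − α. The target b = 100 and α = 0.1 (like the prices) are made-up illustration values, not from the slides.

```python
import random

# Empirical chance constraint: smallest x with P(F(x, D) >= b) >= 1 - alpha.
s, c, r = 2.0, 0.3, 0.05

def profit(x, d):
    sold = min(x, max(d, 0.0))
    return s * sold + r * (x - sold) - c * x

random.seed(7)
scenarios = [random.gauss(100, 20) for _ in range(20_000)]
b, alpha = 100.0, 0.10

x = 0
while sum(profit(x, d) >= b for d in scenarios) < (1 - alpha) * len(scenarios):
    x += 1
print(x)  # smallest order meeting the chance constraint
```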

Introduction to SP: Take Away Message

The “Flaw” of Averages: the flaw of averages occurs when uncertainties are replaced by single average numbers in planning. Joke: Did you hear the one about the statistician who drowned fording a river with an average depth of three feet?

Introduction to SP: Take-away Message: Point Estimates

If you are planning using point estimates, then you are planning sub-optimally. It doesn’t matter how carefully you choose the point estimate: it is impossible to hedge against future uncertainty by considering only one realization of the uncertainty in your planning process.


Stages: Stages and Decisions

The newsvendor problem is a classical “recourse problem”:

1. We make a decision now (first-period decision).
2. Nature makes a random decision (“stuff” happens).
3. We make a second-period decision that attempts to repair the havoc wrought by nature in (2) (recourse).

Key Idea: The evolution of information is of paramount importance.


Stages: Newsvendor Again

Newsvendor Profit: $F(x, D) = \min\{(s - c)x,\; (s - r)D + (r - c)x\}$

D is a random variable with cdf $H_D(t)$. We showed that

$$x^* = H^{-1}\!\left(\frac{s - c}{s - r}\right).$$

Suppose that $\Omega = \{d_1, d_2, \ldots, d_{|S|}\}$, so there is a finite set of scenarios $S$, each with associated probability $p_j$ ($\sum_{j \in S} p_j = 1$).



Stages: Newsvendor SP

Writing an optimization model for the newsvendor:

Parameters
- $d_s$: demand for newspapers in scenario s
- $p_s$: probability of scenario s

Variables
- $x$: number to purchase
- $y_s$: number to sell in scenario s
- $z_s$: number to salvage in scenario s

Stages: Newsvendor Stochastic LP

$$\begin{aligned}
\max\quad & -cx + \sum_{s \in S} p_s (q y_s + r z_s) \\
\text{s.t.}\quad & y_s \le d_s \quad \forall s \in S \\
& x - y_s - z_s = 0 \quad \forall s \in S \\
& x \ge 0, \; y_s, z_s \ge 0 \quad \forall s \in S
\end{aligned}$$

(Here q denotes the selling price and r the salvage price; s now indexes scenarios.)


Stages: Put Another Way

We could write the objective for the newsvendor problem in the form

$$E[F(x, D)] = -cx + E[Q(x, D)],$$

where

$$Q(x, D) = \max_{y \ge 0,\, z \ge 0} \{\, qy + rz \mid y \le D,\; y + z = x \,\}.$$

Q(x, D) is the optimal recourse function: Given that we have chosen x and observed demand D, what should I do to maximize profit?



Stages: It’s Not Always So Easy

For the newsvendor, the recourse function Q(x, D) has a simple closed form:

$$Q(x, D) = \min\{\, sx,\; sD + r(x - D) \,\}$$

In general, the recourse function may not be simple. In fact, for two-stage stochastic linear programs, the recourse function will be the optimal value of a linear program.
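The closed form can be checked against a brute-force search over the feasible recourse decisions (a stand-in for actually solving the small LP); prices are the illustrative ones used earlier.

```python
# Compare the closed-form recourse function with a direct search over
# y (papers sold, 0 <= y <= min(x, d)) with z = x - y salvaged.
s, r = 2.0, 0.05

def Q_closed(x, d):
    return min(s * x, s * d + r * (x - d))

def Q_search(x, d, steps=1000):
    return max(s * (min(x, d) * k / steps) + r * (x - min(x, d) * k / steps)
               for k in range(steps + 1))

for x, d in [(80, 100), (100, 100), (120, 100)]:
    assert abs(Q_closed(x, d) - Q_search(x, d)) < 1e-6
print("closed form matches search")
```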



Stages: Scenario Modeling

The most common representation of uncertainty (in stochastic programming) is via a list of scenarios, which are specific representations of how the future will unfold. Think of these as random variables $\xi_1, \xi_2, \ldots, \xi_S$, with $\xi_j \in \Xi$.

What we CAN’T do: Planners often generate a solution for each scenario generated: “what-if” analysis. Each solution yields a prescription of what should be done if that scenario occurs, but there is no theoretical guidance about the compromise between those prescriptions. Can we “combine” these prescriptions in a natural way? Stochastic Programming does this!

Farmer Ted: Background

Farmer Ted

In this example the farmer has recourse; that is, he can do something at step (3), not just sell his newspapers.
- Farmer Ted can grow wheat, corn, or beans on his 500 acres.
- Farmer Ted requires 200 tons of wheat and 240 tons of corn to feed his cattle. These can be grown on his land or bought from a wholesaler.

Farmer Ted: More Constraints

- Any excess production can be sold for $170/ton (wheat) and $150/ton (corn).
- Any shortfall must be bought from the wholesaler at a cost of $238/ton (wheat) and $210/ton (corn).
- Farmer Ted can also grow beans. Beans sell at $36/ton for the first 6000 tons. Due to economic quotas on bean production, beans in excess of 6000 tons can only be sold at $10/ton.

Farmer Ted: The Data

500 acres available for planting

                          Wheat    Corn    Beans
Yield (T/acre)              2.5       3       20
Planting Cost ($/acre)      150     230      260
Selling Price ($/T)         170     150      36 (≤ 6000 T), 10 (> 6000 T)
Purchase Price ($/T)        238     210      N/A
Minimum Requirement (T)     200     240      N/A

Farmer Ted: Formulate the LP – Decision Variables

xW, xC, xB: acres of wheat, corn, beans planted
wW, wC, wB: tons of wheat, corn, beans sold (at the favorable price)
eB: tons of beans sold at the lower price
yW, yC: tons of wheat, corn purchased

Note that Farmer Ted has recourse. After he observes the weather event, he can decide how much of each crop to sell or purchase!

Farmer Ted: Formulation

max −150xW − 230xC − 260xB − 238yW + 170wW − 210yC + 150wC + 36wB + 10eB

subject to

xW + xC + xB ≤ 500
2.5xW + yW − wW = 200
3xC + yC − wC = 240
20xB − wB − eB = 0
wB ≤ 6000
xW, xC, xB, yW, yC, eB, wW, wC, wB ≥ 0

Farmer Ted: Solution with (Expected) Yields

               Wheat    Corn    Beans
Plant (acres)    120      80      300
Production       300     240     6000
Sales            100       0     6000
Purchase           0       0        0

Profit: $118,600
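The reported plan and profit can be sanity-checked by plugging the plan into the model by hand:

```python
# Sanity-check the mean-yield plan: plug (120, 80, 300) acres and the
# implied best recourse into the LP objective.
yields = {"wheat": 2.5, "corn": 3.0, "beans": 20.0}
plant = {"wheat": 120.0, "corn": 80.0, "beans": 300.0}
cost = {"wheat": 150.0, "corn": 230.0, "beans": 260.0}
sell = {"wheat": 170.0, "corn": 150.0}
buy = {"wheat": 238.0, "corn": 210.0}
need = {"wheat": 200.0, "corn": 240.0}

profit = -sum(cost[c] * plant[c] for c in plant)
for crop in ("wheat", "corn"):
    surplus = yields[crop] * plant[crop] - need[crop]
    # sell a surplus at the favorable price, cover a shortfall at the buy price
    profit += sell[crop] * surplus if surplus >= 0 else buy[crop] * surplus
beans = yields["beans"] * plant["beans"]
profit += 36.0 * min(beans, 6000.0) + 10.0 * max(beans - 6000.0, 0.0)
print(profit)  # 118600.0
```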

Farmer Ted: It’s the Weather, Stupid!

Farmer Ted knows well enough that his yields aren’t always precisely Y = (2.5, 3, 20). He decides to run two more scenarios: good weather (yields 1.2Y) and bad weather (yields 0.8Y).

Farmer Ted: Creating a Stochastic Model

Here is a general procedure for making a (scenario-based) 2-stage stochastic optimization problem:
- For a “nominal” state of nature (scenario), formulate an appropriate LP model.
- Decide which decisions are made before uncertainty is revealed, and which are decided after.
- All second-stage variables get a “scenario” index.
- Constraints with scenario indices must hold for all scenarios.
- Second-stage variables in the objective function should be weighted by the probability of the scenario occurring.

Farmer Ted: What does this mean in our case?

First-stage variables are the x (planting) variables. Second-stage variables are the y, w, e (purchase and sale) variables; we have one copy of the y, w, e for each scenario! Attach a scenario subscript s = 1, 2, 3 to each of the purchase and sale variables (1: good, 2: average, 3: bad).

wC2: tons of corn sold at the favorable price in scenario 2
eB3: tons of beans sold at the unfavorable price in scenario 3

Farmer Ted: Expected Profit

The second stage cost for each submodel appears in the overall objective function weighted by the probability that nature will choose that scenario

−150xW − 230xC − 260xB
+ (1/3)(−238yW1 + 170wW1 − 210yC1 + 150wC1 + 36wB1 + 10eB1)
+ (1/3)(−238yW2 + 170wW2 − 210yC2 + 150wC2 + 36wB2 + 10eB2)
+ (1/3)(−238yW3 + 170wW3 − 210yC3 + 150wC3 + 36wB3 + 10eB3)

Farmer Ted: Constraints

xW + xC + xB ≤ 500
3xW + yW1 − wW1 = 200
2.5xW + yW2 − wW2 = 200
2xW + yW3 − wW3 = 200

Farmer Ted: Constraints (cont.)

3.6xC + yC1 − wC1 = 240
3xC + yC2 − wC2 = 240
2.4xC + yC3 − wC3 = 240
24xB − wB1 − eB1 = 0
20xB − wB2 − eB2 = 0
16xB − wB3 − eB3 = 0
wB1, wB2, wB3 ≤ 6000
All variables ≥ 0

Farmer Ted: Optimal Solution

                   Wheat    Corn    Beans
Plant (acres)        170      80      250
s = 1 (good):
  Production         510     288     6000
  Sales              310      48     6000
  Purchase             0       0        0
s = 2 (average):
  Production         425     240     5000
  Sales              225       0     5000
  Purchase             0       0        0
s = 3 (bad):
  Production         340     192     4000
  Sales              140       0     4000
  Purchase             0      48        0

Farmer Ted: The Value of the Stochastic Solution (VSS)

Suppose we just replaced the “random” quantities (the yields) by their mean values and solved that problem. Would we get the same expected value for the farmer’s profit? How can we check?
- Solve the “mean-value” problem to get a first-stage solution x.
- Fix the first-stage solution at that value x, and solve all the scenarios to see Farmer Ted’s profit in each.
- Take the weighted (by probability) average of the optimal objective value for each scenario.
- Alternatively (and probably faster), we can fix the x variables and solve the stochastic programming problem we created.

Farmer Ted: Computing FT’s VSS

Mean yields Y = (2.5, 3, 20) (We already solved this problem).

xW = 120, xC = 80, xB = 300

Farmer Ted: Fixed Policy – Average Yield Scenario

maximize

−150xW − 230xC − 260xB − 238yW + 170wW − 210yC + 150wC + 36wB + 10eB

subject to

xW = 120

xC = 80

xB = 300

xW + xC + xB ≤ 500

2.5xW + yW − wW = 200

3xC + yC − wC = 240

20xB − wB − eB = 0

wB ≤ 6000

xW , xC , xB , yW , yC , eB , wW , wC , wB ≥ 0

Farmer Ted: Fixed Policy – Average Yield Scenario Solution

               Wheat    Corn    Beans
Plant (acres)    120      80      300
Production       300     240     6000
Sales            100       0     6000
Purchase           0       0        0

Profit: $118,600


Farmer Ted: Fixed Policy – Bad Yield Scenario

maximize

−150xW − 230xC − 260xB − 238yW + 170wW − 210yC + 150wC + 36wB + 10eB

subject to

xW = 120

xC = 80

xB = 300

xW + xC + xB ≤ 500

2xW + yW − wW = 200

2.4xC + yC − wC = 240

16xB − wB − eB = 0

wB ≤ 6000

Objective Value: $55,120

Farmer Ted: Fixed Policy – Good Yield Scenario

maximize

−150xW − 230xC − 260xB − 238yW + 170wW − 210yC + 150wC + 36wB + 10eB

subject to

xW = 120

xC = 80

xB = 300

xW + xC + xB ≤ 500

3xW + yW − wW = 200

3.6xC + yC − wC = 240

24xB − wB − eB = 0

wB ≤ 6000

Objective Value: $148,000

What’s it Worth to Model Randomness?

If Farmer Ted implemented the policy based on using only “average” yields, he would plant xW = 120, xC = 80, xB = 300 He would expect in the long run to make an average profit of... 1/3(118600) + 1/3(55120) + 1/3(148000) = 107240 If Farmer Ted implemented the policy based on the solution to the stochastic programming problem, he would plant xW = 170, xC = 80, xB = 250. From this he would expect to make 108390
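The check above can be sketched in pure Python (a sketch, not from the slides: the helper `scenario_profit` and the constant names are ours; prices and yields are the Farmer Ted data from the lecture). With the planting decision x fixed, each scenario’s second-stage decisions are forced, so the three profits and their probability-weighted average follow directly:

```python
# Hypothetical helper (not from the slides): with x fixed, the second stage
# (buy/sell decisions) has a closed form, so each scenario's profit is direct.

PLANT_COST = (150, 230, 260)      # $/acre: wheat, corn, beans
REQUIRE = (200, 240)              # tons of wheat, corn needed for the cattle
BUY = (238, 210)                  # purchase price $/ton: wheat, corn
SELL = (170, 150)                 # sale price $/ton: wheat, corn
BEAN_HI, BEAN_LO, QUOTA = 36, 10, 6000   # bean prices and the 6000-ton quota

def scenario_profit(x, yields):
    """Profit of planting x = (acres wheat, corn, beans) under given yields."""
    profit = -sum(c * a for c, a in zip(PLANT_COST, x))
    for k in range(2):            # wheat, corn: sell any surplus, buy shortfall
        surplus = x[k] * yields[k] - REQUIRE[k]
        profit += SELL[k] * max(surplus, 0) - BUY[k] * max(-surplus, 0)
    beans = x[2] * yields[2]      # beans: high price up to quota, low beyond
    profit += BEAN_HI * min(beans, QUOTA) + BEAN_LO * max(beans - QUOTA, 0)
    return profit

x_mv = (120, 80, 300)                                      # mean-value plan
scenarios = [(2.5, 3.0, 20.0), (2.0, 2.4, 16.0), (3.0, 3.6, 24.0)]  # avg/bad/good
profits = [scenario_profit(x_mv, s) for s in scenarios]
expected = sum(profits) / 3
```

The three per-scenario profits match the slide values, and the average reproduces the $107,240 figure used in the VSS comparison.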

VSS

The difference of the values, 108390 − 107240, is the Value of the Stochastic Solution: $1150.

It would pay off $1150 per growing season for Farmer Ted to use the “stochastic” solution rather than the “mean value” solution. $1150 is precisely the “value” of implementing a planting policy based on the “stochastic solution”, rather than the mean-value solution.

(General) Stochastic Programming

A Stochastic Program

min_{x ∈ X} f(x) := E_ω[F(x, ξ(ω))]

2-Stage Stochastic LP w/Recourse

The Recourse Problem: F(x, ω) := cᵀx + Q(x, ω), where

Q(x, ω) := min { q(ω)ᵀy : W(ω)y = h(ω) − T(ω)x, y ≥ 0 }

cᵀx: Pay me now. Q(x, ω): Pay me later.

Extensive Form: Two-Stage Stochastic Linear Program

Assume Ω = {ω1, ω2, . . . , ωS} ⊆ Rʳ, P(ω = ωs) = ps for s = 1, 2, . . . , S, and define
Ts := T(ωs), hs := h(ωs), qs := q(ωs), Ws := W(ωs).

min cᵀx + Σ_{s=1}^{S} ps Qs(x)
s.t. Ax ≥ b
     x ∈ R^{n1}_+

where for s = 1, . . . , S

Qs(x) := Q(x, ωs) = min { qsᵀy : Ws y = hs − Ts x, y ∈ R^{n2}_+ }

Extensive Form

When we have a finite number of scenarios, or if we approximate the problem with a finite number of scenarios², we can write an equivalent extensive form linear program:

min cᵀx + p1q1ᵀy1 + p2q2ᵀy2 + · · · + pSqSᵀyS
s.t. Ax = b
     T1x + W1y1 = h1
     T2x + W2y2 = h2
     ...
     TSx + WSyS = hS
     x ∈ X, y1 ∈ Y, y2 ∈ Y, . . . , yS ∈ Y

²Stay tuned for Dave Morton’s lecture.

The Upshot

This is just a larger linear program It is a larger linear program that also has special structure Jim explains how to exploit this structure tomorrow
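The block structure can be seen by assembling the extensive form mechanically: the objective concatenates c with the probability-weighted q_s vectors, and each scenario contributes one block row [T_s | 0 ... W_s ... 0]. A minimal sketch in pure Python (function name and the tiny test instance are ours, not from the slides):

```python
# Sketch: build the extensive-form LP data from per-scenario blocks.
# Inputs are plain nested lists; a real implementation would use sparse matrices.

def extensive_form(c, q, p, A, T, W):
    """Return (objective, constraint matrix) of
    min c'x + sum_s p_s q_s' y_s  s.t.  Ax = b,  T_s x + W_s y_s = h_s."""
    S = len(p)
    n2 = len(q[0])
    # Objective: first-stage costs, then probability-weighted recourse costs.
    obj = list(c) + [p[s] * qj for s in range(S) for qj in q[s]]
    rows = []
    for a_row in A:                      # first-stage rows: [A | 0 ... 0]
        rows.append(list(a_row) + [0.0] * (S * n2))
    for s in range(S):                   # scenario rows: [T_s | 0 .. W_s .. 0]
        for t_row, w_row in zip(T[s], W[s]):
            rows.append(list(t_row) + [0.0] * (s * n2) + list(w_row)
                        + [0.0] * ((S - 1 - s) * n2))
    return obj, rows

# Tiny illustrative instance: 1 first-stage var, 2 equally likely scenarios.
obj, M = extensive_form(c=[1.0], q=[[2.0], [3.0]], p=[0.5, 0.5],
                        A=[[1.0]], T=[[[1.0]], [[1.0]]], W=[[[1.0]], [[1.0]]])
```

The staircase of W_s blocks down the diagonal is exactly the special structure that decomposition methods exploit.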


Building the Supermodel

Weird Science: a general technique for creating two-stage recourse problems.

1 Write a nominal (one-scenario) model
2 Decide which variables are first stage and which are second stage
3 Give a scenario index s to all second-stage variables and random parameters
4 “Give context” to all scenarios

Jeff Linderoth (UW-Madison) Stochastic Programming Modeling Lecture Notes 61 / 77 X X X min fixi + cijyij i∈I i∈I j∈J

X yij ≥ dj ∀j ∈ J i∈I X yij − uixi ≤ 0 ∀i ∈ I j∈J

xi ∈ {0, 1}, yij ≥ 0 ∀i ∈ I, ∀j ∈ J

Facility Location Example Facility Location and Distribution

Facilities: I Customers: J

Fixed cost fi, capacity ui for facility i ∈ I Demand dj: for j ∈ J Per unit Delivery cost: cij ∀i ∈ J, j ∈ J

Facility Location and Distribution

Facilities: I Customers: J

Fixed cost fi and capacity ui for facility i ∈ I. Demand dj for j ∈ J. Per-unit delivery cost cij ∀i ∈ I, j ∈ J.

min Σ_{i∈I} fi xi + Σ_{i∈I} Σ_{j∈J} cij yij

s.t. Σ_{i∈I} yij ≥ dj ∀j ∈ J
     Σ_{j∈J} yij − ui xi ≤ 0 ∀i ∈ I
     xi ∈ {0, 1}, yij ≥ 0 ∀i ∈ I, ∀j ∈ J

AMPL for FL

AMPL Code

var x{I} binary;
var y{I,J} >= 0;

minimize Cost:
    sum{i in I} f[i]*x[i] + sum{i in I, j in J} c[i,j]*y[i,j];

subject to MeetDemand{j in J}:
    sum{i in I} y[i,j] >= d[j];

subject to FacCapacity{i in I}:
    sum{j in J} y[i,j] - u[i]*x[i] <= 0;

Evolution of Information

1 Build facilities now
2 Demand becomes known. One of the scenarios S = {d1, d2, . . . , d|S|} happens
3 Meet demand from open facilities

First-stage variables: xi
Second-stage variables: yijs


The SuperModel

min Σ_{i∈I} fi xi + Σ_{s∈S} ps Σ_{i∈I} Σ_{j∈J} cij yijs

s.t. Σ_{i∈I} yijs ≥ djs ∀j ∈ J, ∀s ∈ S
     Σ_{j∈J} yijs − ui xi ≤ 0 ∀i ∈ I, ∀s ∈ S
     xi ∈ {0, 1}, yijs ≥ 0 ∀i ∈ I, ∀j ∈ J, ∀s ∈ S
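A quick way to see how the supermodel grows: every second-stage variable and constraint gets one copy per scenario, so the size is linear in |S|. A minimal bookkeeping sketch (the function name is ours, not from the slides):

```python
# Sketch: size of the facility-location supermodel as a function of the
# numbers of facilities, customers, and scenarios.

def supermodel_size(n_fac, n_cust, n_scen):
    # x_i for each facility, plus a scenario copy y_ijs of every y_ij
    n_vars = n_fac + n_fac * n_cust * n_scen
    # demand rows (customer x scenario) + capacity rows (facility x scenario)
    n_cons = n_cust * n_scen + n_fac * n_scen
    return n_vars, n_cons
```

For instance, 10 facilities, 50 customers, and 100 scenarios already give 50,010 variables and 6,000 constraints, compared with 510 and 60 for the nominal model.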

Modeling Discussion

Do we always want to meet demand, regardless of the outcome ds? What happens on the off chance that our product is so popular that we can’t possibly meet demand, even if we opened all of the facilities?

Does the world end?

Two Ideas
1 We could penalize not meeting the demand of customers.
2 We only want to meet demand “most of the time”. (Chance constraint)

SP Definitions

A 2-stage stochastic optimization problem has complete recourse if, for every scenario and every first-stage solution (feasible or not), there always exists a feasible second-stage solution:

Qs(x) < +∞ ∀x ∈ R^{n1}, ∀s = 1, . . . , S

A 2-stage stochastic optimization problem has relatively complete recourse if, for every scenario and every feasible first-stage solution, there always exists a feasible second-stage solution:

Qs(x) < +∞ ∀x ∈ X, ∀s = 1, . . . , S
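A minimal illustration (our example, not from the slides): with "simple recourse" W = [I, −I], the second stage is feasible for every right-hand side t = h_s − T_s x, so such a problem has complete recourse. Splitting t into its positive and negative parts always yields a nonnegative feasible y:

```python
# Sketch: simple recourse W = [I, -I]. For any right-hand side t, the split
# y+ = max(t, 0), y- = max(-t, 0) satisfies y+ - y- = t with y+, y- >= 0,
# so the second stage can never be infeasible.

def simple_recourse_solution(t):
    y_plus = [max(ti, 0.0) for ti in t]
    y_minus = [max(-ti, 0.0) for ti in t]
    return y_plus, y_minus

def check_feasible(t):
    yp, ym = simple_recourse_solution(t)
    # W y = y+ - y- must reproduce t, with all components nonnegative.
    return (all(abs((p - m) - ti) < 1e-12 for p, m, ti in zip(yp, ym, t))
            and all(v >= 0 for v in yp + ym))
```

The shortfall-penalty formulation on the next slide works for the same reason: the slack variable e_js guarantees the demand rows can always be satisfied.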

Penalize Shortfall: A Recourse Formulation

min Σ_{i∈I} fi xi + Σ_{s∈S} ps ( Σ_{i∈I} Σ_{j∈J} cij yijs + Σ_{j∈J} λ ejs )

s.t. Σ_{i∈I} yijs + ejs ≥ djs ∀j ∈ J, ∀s ∈ S
     Σ_{j∈J} yijs − ui xi ≤ 0 ∀i ∈ I, ∀s ∈ S
     xi ∈ {0, 1}, yijs ≥ 0, ejs ≥ 0 ∀i ∈ I, ∀j ∈ J, ∀s ∈ S

Stop. AMPL Time.

AMPL Hints

1 All chapters of the AMPL book are available for download: http://ampl.com/resources/the-ampl-book/chapter-downloads/
2 You can change the solver with the command option solver cplex; (or replace cplex with baron, conopt, knitro, loqo, minos, snopt, xpress).
3 Use var to declare variables; you may also put >= 0 on the same line if the variables are constrained to be non-negative.
4 Once your AMPL model is complete, you can type model file.mod; at the ampl: prompt. This will tell you if you have syntax errors.
5 If you have syntax errors, fix them, save the file, and type reset; Then go to 4.
6 If no errors, type solve;

AMPL Entities

Data
  Sets: lists of products, materials, etc.
  Parameters: numerical inputs such as costs, etc.
Model
  Variables: the values to be decided upon.
  Objective Function.
  Constraints.

Data and Model typically stored in different files!

Template of Typical AMPL File

Define Sets
Define Parameters
Define Variables
  (variable bound constraints can also be defined in this section)
Define Objective
Define Constraints

Important AMPL Keywords/Syntax

model file.mod; data file.dat; reset; quit;

set param var maximize (minimize) subject to

Important AMPL Notes

The # character starts a comment
All statements must end in a semi-colon;
Names must be unique! A variable and a constraint cannot have the same name
AMPL is case sensitive. Keywords must be in lower case.
Even if the AMPL error message is cryptic, look at the location where it shows an error – this will often help you deduce what is wrong.

Learning Data Input

Look at examples
Look at Chapter 9 of the AMPL Book: http://ampl.com/resources/the-ampl-book/chapter-downloads/

Some AMPL Tips

option show_stats 1; shows the problem size

Conclusions

Replacing uncertain parameters with point estimates may lead to sub-optimal planning: the flaw of averages
Two-stage recourse problems: Decision → Event → Decision
The Value of the Stochastic Solution
Creating the extensive form/supermodel

VSS: Value of the Stochastic Solution

Let zs be the optimal solution value to

zs := min_{x ∈ X} E[F(x, ξ(ω))]

Let xmv be an optimal solution to the “mean-value” problem:

xmv ∈ arg min_{x ∈ X} F(x, E[ξ(ω)])

Let zmv be the long-run cost if you plan based on the policy obtained from the “average” scenario:

zmv := E[F(xmv, ξ(ω))]

Value of the Stochastic Solution

vss := zmv − zs

Simple HW: Prove vss ≥ 0
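The homework is a one-line argument; a sketch in LaTeX: since x_mv lies in X, it is a feasible candidate in the minimization defining z_s, so the minimum can only be smaller.

```latex
% Sketch of the homework argument: vss >= 0.
\[
  z_s \;=\; \min_{x \in X} \mathbb{E}\!\left[F(x,\xi(\omega))\right]
  \;\le\; \mathbb{E}\!\left[F(x_{mv},\xi(\omega))\right]
  \;=\; z_{mv}
  \quad\Longrightarrow\quad
  vss \;=\; z_{mv} - z_s \;\ge\; 0.
\]
```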