Math 407A: Linear Optimization

Lecture 12: The Geometry of Linear Programming

Math Dept, University of Washington

1 The Geometry of Linear Programming
    Hyperplanes
    Convex Polyhedra
    Vertices

2 The Geometry of Degeneracy

3 The Geometry of Duality

The Geometry of Linear Programming

Hyperplanes

Definition: A hyperplane in $\mathbb{R}^n$ is any set of the form
$$H(a, \beta) = \{x : a^T x = \beta\},$$
where $a \in \mathbb{R}^n \setminus \{0\}$ and $\beta \in \mathbb{R}$.

Fact: $H \subset \mathbb{R}^n$ is a hyperplane if and only if the set
$$H - x_0 = \{x - x_0 : x \in H\},$$
where $x_0 \in H$, is a subspace of $\mathbb{R}^n$ of dimension $n - 1$.

Hyperplanes

What are the hyperplanes in $\mathbb{R}$? Points.

What are the hyperplanes in $\mathbb{R}^2$? Lines.

What are the hyperplanes in $\mathbb{R}^3$? Planes.

What are the hyperplanes in $\mathbb{R}^n$? Translates of $(n-1)$-dimensional subspaces.

Hyperplanes

Every hyperplane divides the space in half; this division defines two closed half-spaces.

The two closed half-spaces associated with the hyperplane
$$H(a, \beta) = \{x : a^T x = \beta\}$$
are
$$H_+(a, \beta) = \{x \in \mathbb{R}^n : a^T x \geq \beta\} \quad\text{and}\quad H_-(a, \beta) = \{x \in \mathbb{R}^n : a^T x \leq \beta\}.$$

Intersections of Closed Half-Spaces

Consider the constraint region of an LP,
$$\Omega = \{x : Ax \leq b,\ 0 \leq x\}.$$

Define the half-spaces
$$H_j = \{x : e_j^T x \geq 0\} \quad\text{for } j = 1, \dots, n$$
and
$$H_{n+i} = \{x : a_{i\cdot}^T x \leq b_i\} \quad\text{for } i = 1, \dots, m,$$
where $a_{i\cdot}$ is the $i$th row of $A$. Then
$$\Omega = \bigcap_{k=1}^{n+m} H_k.$$

That is, the constraint region of an LP is the intersection of finitely many closed half-spaces.

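As a quick illustration of this description, membership in $\Omega$ can be tested numerically by checking each half-space inequality in turn. A minimal numpy sketch (the matrix and points below are made-up data, not from the lecture):

```python
import numpy as np

# Hypothetical data: Ax <= b with x >= 0 in R^2.
A = np.array([[-2.0,  1.0],
              [ 2.0, -1.0]])
b = np.array([2.0, 4.0])

def in_Omega(x, A, b, tol=1e-9):
    """Test x against every closed half-space defining Omega."""
    nonneg = np.all(x >= -tol)        # x in H_j for j = 1..n
    ineq = np.all(A @ x <= b + tol)   # x in H_{n+i} for i = 1..m
    return bool(nonneg and ineq)

print(in_Omega(np.array([1.0, 1.0]), A, b))   # True: feasible point
print(in_Omega(np.array([-1.0, 0.0]), A, b))  # False: violates x >= 0
```
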
Convex Polyhedra

Definition: Any subset of $\mathbb{R}^n$ that can be represented as the intersection of finitely many closed half-spaces is called a convex polyhedron.

If a convex polyhedron in $\mathbb{R}^n$ is contained within a set of the form
$$\{x : \ell \leq x \leq u\},$$
where $\ell, u \in \mathbb{R}^n$ with $\ell \leq u$, then it is called a convex polytope.

A linear program is simply the problem of either maximizing or minimizing a linear function over a convex polyhedron.

We now develop the geometry of convex polyhedra.

Convex sets

Fact: Given any two points $x$ and $y$ in $\mathbb{R}^n$, the line segment connecting them is given by
$$[x, y] = \{(1 - \lambda)x + \lambda y : 0 \leq \lambda \leq 1\}.$$

Definition: A subset $C \subset \mathbb{R}^n$ is said to be convex if $[x, y] \subset C$ whenever $x, y \in C$.

Fact: A convex polyhedron is a convex set.

Example

$$c_1:\ -x_1 - x_2 \leq -2, \qquad c_2:\ 3x_1 - 4x_2 \leq 0, \qquad c_3:\ -x_1 + 3x_2 \leq 6$$

[Figure: the region $C$ in the $(x_1, x_2)$-plane bounded by the lines $c_1$, $c_2$, $c_3$, with its three vertices $v_1$, $v_2$, $v_3$ marked.]

The vertices are $v_1 = \left(\tfrac{8}{7}, \tfrac{6}{7}\right)$, $v_2 = (0, 2)$, and $v_3 = \left(\tfrac{24}{5}, \tfrac{18}{5}\right)$.

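Each vertex above lies at the intersection of two of the constraint lines, so it can be recovered by solving the corresponding 2x2 system and keeping the feasible solutions. A minimal numpy sketch of that computation:

```python
import numpy as np

# Constraint system Ax <= b for c1, c2, c3 above.
A = np.array([[-1.0, -1.0],   # c1
              [ 3.0, -4.0],   # c2
              [-1.0,  3.0]])  # c3
b = np.array([-2.0, 0.0, 6.0])

# Each vertex solves the 2x2 system of its two active constraints.
for active in [(0, 1), (0, 2), (1, 2)]:
    v = np.linalg.solve(A[list(active)], b[list(active)])
    feasible = np.all(A @ v <= b + 1e-9)
    print(active, np.round(v, 6), "feasible:", feasible)
# (0, 1) -> [1.142857, 0.857143] = (8/7, 6/7)
# (0, 2) -> [0., 2.]
# (1, 2) -> [4.8, 3.6]           = (24/5, 18/5)
```
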
Lecture 12: The Geometry of Linear Programming (Math Dept, UniversityMath 407A: of Washington)Linear Optimization 9 / 49 The Fundamental Representation Theorem for Vertices m n Let T = (tij )m×n, g ∈ R , and consider the convex polyhedron C := {x ∈ R | Tx ≤ g }. A x ∈ C is a vertex of C if and only if there exist an index set I ⊂ {1,..., m} such that x is the unique solution to the system of equations

n X tij xj = gi i ∈ I. j=1

Moreover, if x is a vertex, then one can take |I| = n, where |I| denotes the number of elements in I.

Vertices

Definition: Let C be a convex polyhedron. We say that x ∈ C is a vertex of C if whenever x ∈ [u, v] for some u, v ∈ C, it must be the case that either x = u or x = v.

Lecture 12: The Geometry of Linear Programming (Math Dept, UniversityMath 407A: of Washington)Linear Optimization 10 / 49 Vertices

Definition: Let C be a convex polyhedron. We say that x ∈ C is a vertex of C if whenever x ∈ [u, v] for some u, v ∈ C, it must be the case that either x = u or x = v.

The Fundamental Representation Theorem for Vertices m n Let T = (tij )m×n, g ∈ R , and consider the convex polyhedron C := {x ∈ R | Tx ≤ g }. A point x ∈ C is a vertex of C if and only if there exist an index set I ⊂ {1,..., m} such that x is the unique solution to the system of equations

n X tij xj = gi i ∈ I. j=1

Moreover, if x is a vertex, then one can take |I| = n, where |I| denotes the number of elements in I.

Observations

When does the system of equations
$$\sum_{j=1}^n t_{ij} x_j = g_i, \quad i \in I$$
have a unique solution?

Only if $|I| \geq n$; otherwise, one solution implies infinitely many solutions, since the solution set is then an affine set of positive dimension.

If $|I| > n$, we can select a subset $R \subset I$ of the rows $T_{i\cdot}$ of $T$ so that the vectors $\{T_{i\cdot} : i \in R\}$ form a basis of the row space of $T$. Then $|R| = n$ and $x$ is the unique solution to
$$\sum_{j=1}^n t_{ij} x_j = g_i, \quad i \in R.$$

Vertices

Corollary: A point $x$ in the convex polyhedron described by the system of inequalities
$$Ax \leq b \quad\text{and}\quad 0 \leq x,$$
where $A = (a_{ij})_{m \times n}$, is a vertex of this polyhedron if and only if there exist index sets $I \subset \{1, \dots, m\}$ and $J \subset \{1, \dots, n\}$ with $|I| + |J| = n$ such that $x$ is the unique solution to the system of equations
$$\sum_{j=1}^n a_{ij} x_j = b_i, \quad i \in I, \qquad\text{and}\qquad x_j = 0, \quad j \in J.$$
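
The corollary suggests a direct, if inefficient, way to enumerate vertices: try every choice of $n$ active equations drawn from the rows of $A$ and the coordinate hyperplanes, solve, and keep the feasible solutions of the nonsingular systems. A sketch under those assumptions (exponential in $n$, for illustration only):

```python
import numpy as np
from itertools import combinations

def vertices(A, b, tol=1e-9):
    """Enumerate vertices of {x : Ax <= b, x >= 0} via the corollary:
    pick n equations from [a_i. x = b_i] and [x_j = 0], solve, and
    keep the feasible solutions of nonsingular systems."""
    m, n = A.shape
    rows = np.vstack([A, np.eye(n)])
    rhs = np.concatenate([b, np.zeros(n)])
    verts = []
    for idx in combinations(range(m + n), n):
        T, g = rows[list(idx)], rhs[list(idx)]
        if abs(np.linalg.det(T)) < tol:
            continue                      # no unique solution
        x = np.linalg.solve(T, g)
        if np.all(A @ x <= b + tol) and np.all(x >= -tol):
            if not any(np.allclose(x, v) for v in verts):
                verts.append(x)
    return verts

A = np.array([[-2.0, 1.0], [2.0, -1.0]])  # made-up example
b = np.array([2.0, 4.0])
for v in vertices(A, b):
    print(np.round(v, 6))   # (0,0), (0,2), (2,0)
```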

Example

$$c_1:\ -x_1 - x_2 \leq -2, \qquad c_2:\ 3x_1 - 4x_2 \leq 0, \qquad c_3:\ -x_1 + 3x_2 \leq 6$$

(a) The vertex $v_1 = \left(\tfrac{8}{7}, \tfrac{6}{7}\right)$ is given as the solution to the system
$$-x_1 - x_2 = -2, \qquad 3x_1 - 4x_2 = 0.$$

(b) The vertex $v_2 = (0, 2)$ is given as the solution to the system
$$-x_1 - x_2 = -2, \qquad -x_1 + 3x_2 = 6.$$

(c) The vertex $v_3 = \left(\tfrac{24}{5}, \tfrac{18}{5}\right)$ is given as the solution to the system
$$3x_1 - 4x_2 = 0, \qquad -x_1 + 3x_2 = 6.$$

Application to LPs in Standard Form

$$\sum_{j=1}^n a_{ij} x_j \leq b_i, \quad i = 1, \dots, m, \qquad 0 \leq x_j, \quad j = 1, \dots, n.$$

The associated slack variables:
$$x_{n+i} = b_i - \sum_{j=1}^n a_{ij} x_j, \quad i = 1, \dots, m. \qquad (\clubsuit)$$

Let $\bar{x} = (\bar{x}_1, \dots, \bar{x}_{n+m})$ be any solution to the system $(\clubsuit)$, and define
$$J = \{j \in \{1, \dots, n\} : \bar{x}_j = 0\} \quad\text{and}\quad I = \{i \in \{1, \dots, m\} : \bar{x}_{n+i} = 0\}.$$

Let $\hat{x} = (\bar{x}_1, \dots, \bar{x}_n)$ be the values of the decision variables at $\bar{x}$.

Application to LPs in Standard Form

For each $j \in J \subset \{1, \dots, n\}$ we have $\bar{x}_j = 0$; consequently the hyperplane
$$H_j = \{x \in \mathbb{R}^n : e_j^T x = 0\}$$
is active at $\hat{x}$, i.e., $\hat{x} \in H_j$.

Similarly, for each $i \in I \subset \{1, \dots, m\}$ we have $\bar{x}_{n+i} = 0$, and so the hyperplane
$$H_{n+i} = \{x \in \mathbb{R}^n : \sum_{j=1}^n a_{ij} x_j = b_i\}$$
is active at $\hat{x}$, i.e., $\hat{x} \in H_{n+i}$.

Application to LPs in Standard Form

What are the vertices of the system
$$\sum_{j=1}^n a_{ij} x_j \leq b_i, \quad i = 1, \dots, m, \qquad 0 \leq x_j, \quad j = 1, \dots, n?$$

$\hat{x} = (\bar{x}_1, \dots, \bar{x}_n)$ is a vertex of this polyhedron if and only if there exist index sets $I \subset \{1, \dots, m\}$ and $J \subset \{1, \dots, n\}$ with $|I| + |J| = n$ such that $\hat{x}$ is the unique solution to the system of equations
$$\sum_{j=1}^n a_{ij} x_j = b_i, \quad i \in I, \qquad\text{and}\qquad x_j = 0, \quad j \in J.$$

In this case $\bar{x}_{n+i} = 0$ for $i \in I$ (slack variables).
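
In code, this characterization is easy to test for a given point: append the slacks, locate the zero coordinates, and check that the corresponding active system pins the point down uniquely. A minimal sketch with made-up data:

```python
import numpy as np

def is_vertex(x, A, b, tol=1e-9):
    """Test the slide's criterion: the decision vector x is a vertex
    iff the rows of [A; I] active at x have rank n."""
    m, n = A.shape
    slacks = b - A @ x
    if np.any(x < -tol) or np.any(slacks < -tol):
        return False                      # infeasible point
    rows = np.vstack([A, np.eye(n)])
    resid = np.concatenate([slacks, x])   # zero residual <=> active
    active = rows[np.abs(resid) <= tol]
    # x is the unique solution of the active system iff rank = n.
    return np.linalg.matrix_rank(active) == n if len(active) else False

A = np.array([[-2.0, 1.0], [2.0, -1.0]])
b = np.array([2.0, 4.0])
print(is_vertex(np.array([0.0, 2.0]), A, b))  # True: a vertex
print(is_vertex(np.array([0.5, 0.5]), A, b))  # False: interior point
```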

Vertices and BFSs

That is, $\hat{x}$ is a vertex of the polyhedral constraint region of an LP in standard form if and only if a total of $n$ of the variables $\{\bar{x}_1, \bar{x}_2, \dots, \bar{x}_{n+m}\}$ take the value zero, while the values of the remaining $m$ variables are uniquely determined by setting these $n$ variables to zero.

But then, $\hat{x}$ is a vertex if and only if it is a BFS!

Therefore, one can geometrically interpret the simplex algorithm as a procedure that moves from one vertex of the constraint polyhedron to another of higher objective value until an optimal vertex is found.

Vertices and BFSs

The simplex algorithm terminates finitely since every vertex is connected to every other vertex by a path of adjacent vertices on the surface of the polyhedron.

Example

$$\begin{array}{lr}
\text{maximize} & 3x_1 + 4x_2 \\
\text{subject to} & -2x_1 + x_2 \leq 2 \\
 & 2x_1 - x_2 \leq 4 \\
 & 0 \leq x_1 \leq 3, \ 0 \leq x_2 \leq 4.
\end{array}$$

[Figure: the feasible region in the $(x_1, x_2)$-plane with vertices $V_1, \dots, V_6$ labeled, $V_1 = (0, 0)$ at the origin.]

Example

Tableau 1, vertex $V_1 = (0, 0)$ (columns $x_1, x_2$, slacks $x_3, \dots, x_6$, right-hand side; objective row last):

    -2    1    1    0    0    0 |   2
     2   -1    0    1    0    0 |   4
     1    0    0    0    1    0 |   3
     0    1    0    0    0    1 |   4
     3    4    0    0    0    0 |   0

Tableau 2, vertex $V_2 = (0, 2)$:

    -2    1    1    0    0    0 |   2
     0    0    1    1    0    0 |   6
     1    0    0    0    1    0 |   3
     2    0   -1    0    0    1 |   2
    11    0   -4    0    0    0 |  -8

Tableau 3, vertex $V_3 = (1, 4)$:

     0    1     0    0    0     1  |   4
     0    0     1    1    0     0  |   6
     1    0  -1/2    0    0   1/2  |   1
     0    0   1/2    0    1  -1/2  |   2
     0    0   3/2    0    0 -11/2  | -19

Tableau 4, vertex $V_4 = (3, 4)$, optimal:

     0    1    0    0    0    1 |   4
     0    0    0    1   -2    1 |   2
     1    0    0    0    1    0 |   3
     0    0    1    0    2   -1 |   4
     0    0    0    0   -3   -4 | -25
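
The moves between these tableaus are ordinary simplex pivots. A minimal sketch of the pivot operation (Gauss-Jordan elimination on the chosen pivot entry; the function name and tableau layout are my own conventions, not from the lecture), reproducing the step from Tableau 1 to Tableau 2:

```python
import numpy as np

def pivot(T, r, c):
    """Simplex pivot: scale row r so T[r, c] = 1, then eliminate
    column c from every other row (including the objective row)."""
    T = T.astype(float).copy()
    T[r] /= T[r, c]
    for i in range(T.shape[0]):
        if i != r:
            T[i] -= T[i, c] * T[r]
    return T

# Tableau 1 above: columns x1, x2, four slacks, rhs; objective last.
T1 = np.array([[-2,  1, 1, 0, 0, 0,  2],
               [ 2, -1, 0, 1, 0, 0,  4],
               [ 1,  0, 0, 0, 1, 0,  3],
               [ 0,  1, 0, 0, 0, 1,  4],
               [ 3,  4, 0, 0, 0, 0,  0]])

# x2 enters; the ratio test picks row 0 (ratio 2/1 beats 4/1).
T2 = pivot(T1, r=0, c=1)
print(T2)   # matches Tableau 2; objective row 11 0 -4 0 0 0 | -8
```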

Vertex Pivoting

The BFSs in the simplex algorithm are vertices, and every vertex of the polyhedral constraint region is a BFS.

Phase I of the simplex algorithm is a procedure for finding a vertex of the constraint region, while Phase II is a procedure for moving between adjacent vertices, successively increasing the value of the objective function.

The Geometry of Degeneracy

Let $\Omega = \{x : Ax \leq b,\ 0 \leq x\}$ be the constraint region for an LP in standard form. $\Omega$ is the intersection of the closed half-spaces
$$H_j = \{x : e_j^T x \geq 0\} \quad\text{for } j = 1, \dots, n$$
and
$$H_{n+i} = \{x : \sum_{j=1}^n a_{ij} x_j \leq b_i\} \quad\text{for } i = 1, \dots, m.$$

A basic feasible solution (vertex) is said to be degenerate if one or more of the basic variables is assigned the value zero. This implies that more than $n$ of the bounding hyperplanes $H_k$, $k = 1, 2, \dots, n+m$, are active at this vertex.

Example

$$\begin{array}{lr}
\text{maximize} & 3x_1 + 4x_2 \\
\text{subject to} & -2x_1 + x_2 \leq 2 \\
 & 2x_1 - x_2 \leq 4 \\
 & -x_1 + x_2 \leq 3 \\
 & x_1 + x_2 \leq 7 \\
 & 0 \leq x_1 \leq 3, \ 0 \leq x_2 \leq 4.
\end{array}$$

[Figure: the same feasible region with vertices $V_1, \dots, V_6$; the two added constraints pass through existing vertices of the region.]

Example

Tableau 1, vertex $V_1 = (0, 0)$ (columns $x_1, x_2$, slacks $x_3, \dots, x_8$, right-hand side; objective row last):

    -2    1    1    0    0    0    0    0 |   2
     2   -1    0    1    0    0    0    0 |   4
    -1    1    0    0    1    0    0    0 |   3
     1    1    0    0    0    1    0    0 |   7
     1    0    0    0    0    0    1    0 |   3
     0    1    0    0    0    0    0    1 |   4
     3    4    0    0    0    0    0    0 |   0

Tableau 2, vertex $V_2 = (0, 2)$:

    -2    1    1    0    0    0    0    0 |   2
     0    0    1    1    0    0    0    0 |   6
     1    0   -1    0    1    0    0    0 |   1
     3    0   -1    0    0    1    0    0 |   5
     1    0    0    0    0    0    1    0 |   3
     2    0   -1    0    0    0    0    1 |   2
    11    0   -4    0    0    0    0    0 |  -8

Tableau 3, vertex $V_3 = (1, 4)$, degenerate (a basic variable has value zero):

     0    1   -1    0    2    0    0    0 |   4
     0    0    1    1    0    0    0    0 |   6
     1    0   -1    0    1    0    0    0 |   1
     0    0    2    0   -3    1    0    0 |   2
     0    0    1    0   -1    0    1    0 |   2
     0    0    1    0   -2    0    0    1 |   0
     0    0    7    0  -11    0    0    0 | -19

Tableau 4, vertex $V_3 = (1, 4)$ again, degenerate (a degenerate pivot changed the representation but not the vertex):

     0    1    0    0    0    0    0    1 |   4
     0    0    0    1    2    0    0    1 |   6
     1    0    0    0   -1    0    0    1 |   1
     0    0    0    0    1    1    0   -2 |   2
     0    0    0    0    1    0    1   -1 |   2
     0    0    1    0   -2    0    0    1 |   0
     0    0    0    0    3    0    0   -7 | -19

Tableau 5, vertex $V_4 = (3, 4)$, optimal and degenerate:

     0    1    0    0    0    0    0    1 |   4
     0    0    0    1    0   -2    0    5 |   2
     1    0    0    0    0    1    0   -1 |   3
     0    0    0    0    1    1    0   -2 |   2
     0    0    0    0    0   -1    1    1 |   0
     0    0    1    0    0    2    0   -3 |   4
     0    0    0    0    0   -3    0   -1 | -25
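
Degeneracy is mechanical to detect in a feasible tableau: some basic variable sits at value zero, i.e., some constraint row has a zero right-hand side. A minimal sketch, assuming the tableau layout used above:

```python
import numpy as np

def is_degenerate(T, tol=1e-9):
    """A feasible tableau is degenerate if some constraint row has a
    zero right-hand side, i.e., a basic variable equals zero."""
    rhs = T[:-1, -1]            # exclude the objective row
    return bool(np.any(np.abs(rhs) <= tol))

# Tableau 3 above has a zero in its right-hand-side column.
T3 = np.array([[0, 1, -1, 0,   2, 0, 0, 0,   4],
               [0, 0,  1, 1,   0, 0, 0, 0,   6],
               [1, 0, -1, 0,   1, 0, 0, 0,   1],
               [0, 0,  2, 0,  -3, 1, 0, 0,   2],
               [0, 0,  1, 0,  -1, 0, 1, 0,   2],
               [0, 0,  1, 0,  -2, 0, 0, 1,   0],   # basic variable = 0
               [0, 0,  7, 0, -11, 0, 0, 0, -19]])
print(is_degenerate(T3))   # True
```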

Degeneracy = Multiple Representations of a Vertex

A degenerate tableau occurs when the associated BFS (or vertex) can be represented as the intersection point of more than one subset of $n$ active hyperplanes.

A degenerate pivot occurs when we move between two different representations of a vertex as the intersection of $n$ hyperplanes.

Cycling means that the simplex algorithm is cycling between different representations of the same vertex.

Degeneracy = Multiple Representations of a Vertex

In the previous example, the third tableau represents the vertex $V_3 = (1, 4)$ as the intersection of the hyperplanes
$$-2x_1 + x_2 = 2 \quad (\text{since } x_3 = 0) \qquad\text{and}\qquad -x_1 + x_2 = 3 \quad (\text{since } x_5 = 0).$$

The third pivot brings us to the fourth tableau, where the vertex $V_3 = (1, 4)$ is represented as the intersection of the hyperplanes
$$-x_1 + x_2 = 3 \quad (\text{since } x_5 = 0) \qquad\text{and}\qquad x_2 = 4 \quad (\text{since } x_8 = 0).$$

Multiple Dual Optimal Solutions and Degeneracy

Optimal tableau 1: primal solution $V_4 = (3, 4)$, dual solution $(0, 0, 0, 3, 0, 1)$:

     0    1    0    0    0    0    0    1 |   4
     0    0    0    1    0   -2    0    5 |   2
     1    0    0    0    0    1    0   -1 |   3
     0    0    0    0    1    1    0   -2 |   2
     0    0    0    0    0   -1    1    1 |   0
     0    0    1    0    0    2    0   -3 |   4
     0    0    0    0    0   -3    0   -1 | -25

Optimal tableau 2: primal solution $V_4 = (3, 4)$, dual solution $(0, 0, 0, 0, 3, 4)$:

     0    1    0    0    0    0    0    0 |   4
     0    0    0    1    0    0   -2    3 |   2
     1    0    0    0    0    0    1    0 |   3
     0    0    0    0    1    0    1   -1 |   2
     0    0    0    0    0    1   -1   -1 |   0
     0    0    1    0    0    0    2   -1 |   4
     0    0    0    0    0    0   -3   -4 | -25

Multiple Dual Optima and Primal Degeneracy

Primal degeneracy in an optimal tableau indicates multiple optimal solutions to the dual, which can be obtained with dual simplex pivots.

Dual degeneracy in an optimal tableau indicates multiple optimal primal solutions, which can be obtained with primal simplex pivots.

A tableau is said to be dual degenerate if there is a non-basic variable whose objective row coefficient is zero.

Multiple Primal Optima and Dual Degeneracy

Optimal tableau 1: primal solution $(0, 15, 10, 0)$:

     50     0    0   100    0    1     -10    5     |    500
      2.5   1    0     2    0    0    -0.1    0.15  |     15
     -0.5   0    0     0    1    0     0     -0.05  |     15
     -1     0    1    -1    0    0     0.1   -0.1   |     10
   -100     0    0     0    0    0   -10    -10     | -11000

Optimal tableau 2: primal solution $(0, 5, 15, 5)$:

      0.5   0    0     1    0    0.01  -0.1   0.05  |      5
      1.5   1    0     0    0   -0.02   0.1   0.05  |      5
     -0.5   0    0     0    1    0      0    -0.05  |     15
     -0.5   0    1     0    0    0.01   0    -0.05  |     15
   -100     0    0     0    0    0    -10   -10     | -11000

The Geometry of Duality

$$\begin{array}{lr}
\text{max} & 3x_1 + x_2 \\
\text{s.t.} & -x_1 + 2x_2 \leq 4 \\
 & 3x_1 - x_2 \leq 3 \\
 & 0 \leq x_1, x_2.
\end{array}$$

[Figure: the feasible region with the objective normal $c = (3, 1)$ and the constraint normals $n_1$ and $n_2$ drawn at the optimal vertex.]

The normal to the hyperplane $-x_1 + 2x_2 = 4$ is $n_1 = (-1, 2)$.

The normal to the hyperplane $3x_1 - x_2 = 3$ is $n_2 = (3, -1)$.

The Geometry of Duality

The objective normal $c = (3, 1)$ can be written as a non-negative linear combination of the active constraint normals $n_1 = (-1, 2)$ and $n_2 = (3, -1)$:
$$c = y_1 n_1 + y_2 n_2.$$
Equivalently,
$$\begin{pmatrix} 3 \\ 1 \end{pmatrix} = y_1 \begin{pmatrix} -1 \\ 2 \end{pmatrix} + y_2 \begin{pmatrix} 3 \\ -1 \end{pmatrix} = \begin{pmatrix} -1 & 3 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}.$$

The Geometry of Duality

Row reducing the augmented system,
$$\left[\begin{array}{rr|r} -1 & 3 & 3 \\ 2 & -1 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{rr|r} 1 & -3 & -3 \\ 0 & 5 & 7 \end{array}\right] \longrightarrow \left[\begin{array}{rr|r} 1 & -3 & -3 \\ 0 & 1 & 7/5 \end{array}\right] \longrightarrow \left[\begin{array}{rr|r} 1 & 0 & 6/5 \\ 0 & 1 & 7/5 \end{array}\right],$$
so $y_1 = \tfrac{6}{5}$ and $y_2 = \tfrac{7}{5}$.

We claim that $y = \left(\tfrac{6}{5}, \tfrac{7}{5}\right)$ is the optimal solution to the dual!
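
This 2x2 solve is easy to verify numerically; a minimal numpy sketch:

```python
import numpy as np

# Solve [n1 n2] y = c for the multipliers y1, y2.
N = np.array([[-1.0,  3.0],
              [ 2.0, -1.0]])   # columns are n1 and n2
c = np.array([3.0, 1.0])
y = np.linalg.solve(N, c)
print(y)               # [1.2 1.4] = (6/5, 7/5)
print(np.all(y >= 0))  # non-negative multipliers: True
```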

The Geometry of Duality

P:  max  3x1 + x2                     D:  min  4y1 + 3y2
    s.t. −x1 + 2x2 ≤ 4                    s.t. −y1 + 3y2 ≥ 3
         3x1 − x2 ≤ 3                          2y1 − y2 ≥ 1
         0 ≤ x1, x2.                           0 ≤ y1, y2.

Primal Solution: (2, 3).  Dual Solution: (6/5, 7/5).

Optimal Value = 9
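
Both problems can be checked with an off-the-shelf solver. A sketch using scipy.optimize.linprog, which minimizes, so the primal objective is negated and the dual's "≥" rows are flipped to "≤":

```python
from scipy.optimize import linprog

# Primal: max 3x1 + x2  ==  min -(3x1 + x2).
p = linprog(c=[-3, -1],
            A_ub=[[-1, 2], [3, -1]], b_ub=[4, 3],
            bounds=[(0, None), (0, None)])
# Dual: min 4y1 + 3y2 with -y1 + 3y2 >= 3 and 2y1 - y2 >= 1.
d = linprog(c=[4, 3],
            A_ub=[[1, -3], [-2, 1]], b_ub=[-3, -1],
            bounds=[(0, None), (0, None)])
print(p.x, -p.fun)   # [2. 3.] 9.0
print(d.x, d.fun)    # [1.2 1.4] 9.0  (equal optimal values)
```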

Geometric Duality Theorem

Consider the LP $(P)\ \max\{c^T x : Ax \leq b,\ 0 \leq x\}$, where $A \in \mathbb{R}^{m \times n}$. Given a vector $\bar{x}$ that is feasible for $P$, define
$$Z(\bar{x}) = \{j \in \{1, \dots, n\} : \bar{x}_j = 0\} \quad\text{and}\quad E(\bar{x}) = \{i \in \{1, \dots, m\} : \sum_{j=1}^n a_{ij} \bar{x}_j = b_i\}.$$
The indices in $Z(\bar{x})$ and $E(\bar{x})$ are the active indices at $\bar{x}$ and correspond to the active hyperplanes at $\bar{x}$. Then $\bar{x}$ solves $P$ if and only if there exist non-negative numbers $r_j$, $j \in Z(\bar{x})$, and $\bar{y}_i$, $i \in E(\bar{x})$, such that
$$c = -\sum_{j \in Z(\bar{x})} r_j e_j + \sum_{i \in E(\bar{x})} \bar{y}_i a_{i\bullet},$$
where for each $i = 1, \dots, m$, $a_{i\bullet} = (a_{i1}, a_{i2}, \dots, a_{in})^T$ is the $i$th column of the matrix $A^T$, and, for each $j = 1, \dots, n$, $e_j$ is the $j$th unit coordinate vector. In addition, if $\bar{x}$ solves $P$, then the vector $\bar{y} \in \mathbb{R}^m$ obtained by setting $\bar{y}_i = 0$ for $i \notin E(\bar{x})$ solves the dual problem.

Geometric Duality Theorem: Proof

First suppose that $\bar{x}$ solves $P$, and let $\bar{y}$ solve $D$. The Complementary Slackness Theorem implies that
$$(I)\quad \bar{y}_i = 0 \ \text{ for } i \in \{1, \dots, m\} \setminus E(\bar{x}) \ \left(\text{where } \sum_{j=1}^n a_{ij} \bar{x}_j < b_i\right), \ \text{ and}$$
$$(II)\quad \sum_{i=1}^m \bar{y}_i a_{ij} = c_j \ \text{ for } j \in \{1, \dots, n\} \setminus Z(\bar{x}) \ (\text{where } 0 < \bar{x}_j).$$
Define $r = A^T \bar{y} - c \geq 0$. By $(II)$, $r_j = 0$ for $j \in \{1, \dots, n\} \setminus Z(\bar{x})$, while
$$(III)\quad c_j = -r_j + \sum_{i=1}^m \bar{y}_i a_{ij} \ \text{ for } j \in Z(\bar{x}).$$
Combining $(I)$, $(II)$, and $(III)$ gives
$$c = -\sum_{j \in Z(\bar{x})} r_j e_j + A^T \bar{y} = -\sum_{j \in Z(\bar{x})} r_j e_j + \sum_{i \in E(\bar{x})} \bar{y}_i a_{i\bullet}.$$

Geometric Duality Theorem: Proof

Conversely, suppose $\bar{x}$ is feasible for $P$ and $0 \leq r_j$, $j \in Z(\bar{x})$, and $0 \leq \bar{y}_i$, $i \in E(\bar{x})$, satisfy
$$c = -\sum_{j \in Z(\bar{x})} r_j e_j + \sum_{i \in E(\bar{x})} \bar{y}_i a_{i\bullet}.$$
Set $\bar{y}_i = 0$ for $i \notin E(\bar{x})$ to obtain $\bar{y} \in \mathbb{R}^m$. Then
$$A^T \bar{y} = \sum_{i \in E(\bar{x})} \bar{y}_i a_{i\bullet} \geq -\sum_{j \in Z(\bar{x})} r_j e_j + \sum_{i \in E(\bar{x})} \bar{y}_i a_{i\bullet} = c,$$
so $\bar{y}$ is feasible for $D$. Moreover,
$$c^T \bar{x} = -\sum_{j \in Z(\bar{x})} r_j e_j^T \bar{x} + \sum_{i \in E(\bar{x})} \bar{y}_i a_{i\bullet}^T \bar{x} = \sum_{i \in E(\bar{x})} \bar{y}_i a_{i\bullet}^T \bar{x} = \bar{y}^T A \bar{x} = \bar{y}^T b,$$
so $\bar{x}$ solves $P$ and $\bar{y}$ solves $D$ by the Weak Duality Theorem.

Example

Does the vector $\bar{x} = (1, 0, 2, 0)^T$ solve the LP
$$\begin{array}{lr}
\text{maximize} & x_1 + x_2 - x_3 + 2x_4 \\
\text{subject to} & x_1 + 3x_2 - 2x_3 + 4x_4 \leq -3 \\
 & 4x_2 - 2x_3 + 3x_4 \leq 1 \\
 & -x_2 + x_3 - x_4 \leq 2 \\
 & -x_1 - x_2 + 2x_3 - x_4 \leq 4 \\
 & 0 \leq x_1, x_2, x_3, x_4?
\end{array}$$
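
The Geometric Duality Theorem turns this question into a finite computation: find the active indices at $\bar{x}$, then look for non-negative multipliers expressing $c$ over the active normals. A sketch of that procedure, mirroring the hand computation on the next slides (here a plain linear solve plus sign checks; a more careful implementation might use a non-negative least-squares routine such as scipy.optimize.nnls):

```python
import numpy as np

A = np.array([[ 1.0,  3.0, -2.0,  4.0],
              [ 0.0,  4.0, -2.0,  3.0],
              [ 0.0, -1.0,  1.0, -1.0],
              [-1.0, -1.0,  2.0, -1.0]])
b = np.array([-3.0, 1.0, 2.0, 4.0])
c = np.array([1.0, 1.0, -1.0, 2.0])
xbar = np.array([1.0, 0.0, 2.0, 0.0])

tol = 1e-9
Z = np.where(np.abs(xbar) <= tol)[0]          # active bounds x_j = 0
E = np.where(np.abs(A @ xbar - b) <= tol)[0]  # active rows a_i. x = b_i
# (indices are 0-based: Z = {1, 3}, E = {0, 2})

# Express c = -sum_{j in Z} r_j e_j + sum_{i in E} y_i a_i. :
# the columns of M are the vectors -e_j (j in Z) and a_i. (i in E).
M = np.column_stack([-np.eye(4)[:, Z], A[E].T])
mult, *_ = np.linalg.lstsq(M, c, rcond=None)
ok = np.allclose(M @ mult, c) and np.all(mult >= -tol)
print("multipliers (r2, r4, y1, y3):", np.round(mult, 6))  # all 1
print("xbar optimal:", ok)                                 # True
```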

Example

Which constraints are active at $\bar{x} = (1, 0, 2, 0)^T$?

$$\begin{array}{rcll}
x_1 + 3x_2 - 2x_3 + 4x_4 & \leq & -3 & \quad =\ \text{(active)} \\
4x_2 - 2x_3 + 3x_4 & \leq & 1 & \quad <\ \text{, so } y_2 = 0 \\
-x_2 + x_3 - x_4 & \leq & 2 & \quad =\ \text{(active)} \\
-x_1 - x_2 + 2x_3 - x_4 & \leq & 4 & \quad <\ \text{, so } y_4 = 0
\end{array}$$

The 1st and 3rd constraints are active.

Example

Knowing $y_2 = y_4 = 0$, solve for $y_1$ and $y_3$ by writing the objective normal as a non-negative linear combination of the outer normals of the active constraints:
$$\begin{pmatrix} 1 & 0 & 0 & 0 \\ 3 & -1 & -1 & 0 \\ -2 & 1 & 0 & 0 \\ 4 & -1 & 0 & -1 \end{pmatrix} \begin{pmatrix} y_1 \\ y_3 \\ r_2 \\ r_4 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ -1 \\ 2 \end{pmatrix}.$$

Example

Row reducing (the unknowns are ordered $y_1, y_3, r_2, r_4$), we get
$$\left[\begin{array}{rrrr|r} 1 & 0 & 0 & 0 & 1 \\ 3 & -1 & -1 & 0 & 1 \\ -2 & 1 & 0 & 0 & -1 \\ 4 & -1 & 0 & -1 & 2 \end{array}\right] \longrightarrow \left[\begin{array}{rrrr|r} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 2 \\ 0 & 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 2 \end{array}\right].$$

Therefore $y_1 = 1$, $y_3 = 1$, $r_2 = 1$, and $r_4 = 1$. Hence $\bar{x} = (1, 0, 2, 0)^T$ solves the primal and $\bar{y} = (1, 0, 1, 0)^T$ solves the dual. We now double-check that the vector $\bar{y} = (1, 0, 1, 0)$ does indeed solve the dual.

Example

Check that $\bar{y} = (1, 0, 1, 0)$ solves the dual problem:

$$\begin{array}{lrl}
\text{minimize} & -3y_1 + y_2 + 2y_3 + 4y_4 & \quad\text{dual slacks} \\
\text{subject to} & y_1 - y_4 \geq 1 & \quad r_1 = 0 \\
 & 3y_1 + 4y_2 - y_3 - y_4 \geq 1 & \quad r_2 = 1 \\
 & -2y_1 - 2y_2 + y_3 + 2y_4 \geq -1 & \quad r_3 = 0 \\
 & 4y_1 + 3y_2 - y_3 - y_4 \geq 2 & \quad r_4 = 1 \\
 & 0 \leq y_1, y_2, y_3, y_4. &
\end{array}$$
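
A numeric double-check of dual feasibility and of the equal objective values, as a minimal sketch:

```python
import numpy as np

A = np.array([[ 1,  3, -2,  4],
              [ 0,  4, -2,  3],
              [ 0, -1,  1, -1],
              [-1, -1,  2, -1]], dtype=float)
b = np.array([-3.0, 1.0, 2.0, 4.0])
c = np.array([1.0, 1.0, -1.0, 2.0])
xbar = np.array([1.0, 0.0, 2.0, 0.0])
ybar = np.array([1.0, 0.0, 1.0, 0.0])

r = A.T @ ybar - c                 # dual slacks
print("dual slacks:", r)           # [0. 1. 0. 1.] >= 0: feasible
print(c @ xbar, b @ ybar)          # -1.0 -1.0: objective values agree
```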

Example 2

Does $x = (3, 1, 0)^T$ solve $P$, where
$$A = \begin{pmatrix} -1 & 3 & -2 \\ 1 & -4 & 2 \\ 1 & 2 & 3 \end{pmatrix}, \quad c = \begin{pmatrix} 1 \\ 7 \\ 3 \end{pmatrix}, \quad b = \begin{pmatrix} 0 \\ 0 \\ 5 \end{pmatrix}?$$

Example 3

Does $x = (1, 2, 1, 0)^T$ solve $P$, where
$$A = \begin{pmatrix} 3 & 1 & 4 & 2 \\ -3 & 2 & 2 & 1 \\ 1 & -2 & 3 & 0 \\ -3 & 2 & -1 & 4 \end{pmatrix}, \quad c = \begin{pmatrix} -2 \\ 0 \\ 5 \\ 2 \end{pmatrix}, \quad b = \begin{pmatrix} 9 \\ 3 \\ 0 \\ 1 \end{pmatrix}?$$