

Integer Linear Programming

Subhas C. Nandy ([email protected])

Advanced Computing and Microelectronics Unit, Indian Statistical Institute, Kolkata 700108, India.

Organization

1 Introduction

2 Linear Programming

3 Integer Programming

Linear Programming

A technique for optimizing a linear objective function, subject to a set of linear equality and linear inequality constraints.

Mathematically,

Maximize c1x1 + c2x2 + ... + cnxn
Subject to:
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
xi ≥ 0 for all i = 1, 2, ..., n.

In matrix notation:
Maximize Cᵀ X
Subject to: AX ≤ B, X ≥ 0
where C is an n × 1 vector (the cost vector), A is an m × n matrix (the coefficient matrix), B is an m × 1 vector (the requirement vector), and X is an n × 1 vector of unknowns.

History

The linear programming method was first developed by Leonid Kantorovich in 1937. He developed it during World War II as a way to plan expenditures and returns so as to reduce costs to the army and increase losses incurred by the enemy. The method was kept secret until 1947, when George B. Dantzig published the simplex method and John von Neumann developed the theory of duality as a linear optimization solution. Dantzig's original example was to find the best assignment of 70 people to 70 jobs subject to constraints. The computing power required to test all the permutations to select the best assignment is vast. However, the theory behind linear programming drastically reduces the number of feasible solutions that must be checked for optimality.

The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979. A major breakthrough came in 1984, with Narendra Karmarkar's interior-point method for solving LPs.

An Example

A farmer has a piece of farmland of area α square kilometers and wants to plant wheat and barley. The farmer has β units of fertilizer and γ units of pesticide in hand.

(f1, p1): The amount of fertilizer and pesticides needed for wheat per square kilometer.

(f2, p2): The amount of fertilizer and pesticides needed for barley per square kilometer.

The profit per square kilometer is c1 for wheat and c2 for barley.

The objective:

To decide x1 and x2, the areas on which wheat and barley are to be planted.

The Problem

Objective function: Maximize c1x1 + c2x2

Constraints:
f1x1 + f2x2 ≤ β
p1x1 + p2x2 ≤ γ
x1 + x2 ≤ α, and x1, x2 ≥ 0.
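With two variables, the LP optimum lies at a vertex of the feasible polygon, so it can be found by intersecting constraint boundaries pairwise. A minimal sketch; the concrete numbers (α = 10, β = 24, γ = 16, (f1, f2) = (3, 2), (p1, p2) = (1, 2), (c1, c2) = (5, 4)) are illustrative assumptions, not values from the text:

```python
from itertools import combinations

# Constraints in the form a*x1 + b*x2 <= rhs (last two encode x1, x2 >= 0).
# alpha=10, beta=24, gamma=16, f=(3,2), p=(1,2), c=(5,4) are assumed values.
constraints = [(1, 1, 10),   # x1 + x2 <= alpha
               (3, 2, 24),   # f1*x1 + f2*x2 <= beta
               (1, 2, 16),   # p1*x1 + p2*x2 <= gamma
               (-1, 0, 0),   # -x1 <= 0
               (0, -1, 0)]   # -x2 <= 0
c1, c2 = 5, 4

def feasible(x1, x2, eps=1e-9):
    return all(a*x1 + b*x2 <= rhs + eps for a, b, rhs in constraints)

best = None
for (a1, b1, r1), (a2, b2, r2) in combinations(constraints, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:          # parallel boundaries: no vertex
        continue
    x1 = (r1*b2 - r2*b1) / det    # Cramer's rule for the 2x2 system
    x2 = (a1*r2 - a2*r1) / det
    if feasible(x1, x2):
        z = c1*x1 + c2*x2
        if best is None or z > best[0]:
            best = (z, x1, x2)

print(best)   # -> (44.0, 4.0, 6.0): optimum at a vertex of the polygon
```

Vertex enumeration is exponential in general; the simplex and interior-point methods mentioned above are the practical tools.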

In matrix form:

Maximize (c1 c2)(x1 x2)ᵀ
Subject to:
[ 1   1  ]            [ α ]
[ f1  f2 ] (x1 x2)ᵀ ≤ [ β ] , and x1, x2 ≥ 0.
[ p1  p2 ]            [ γ ]

Linear Integer Programming

The Problem

Objective Function: Maximize 8x1 + 11x2 + 6x3 + 4x4
Subject to:
5x1 + 7x2 + 4x3 + 3x4 ≤ 14
7x1 + 2x3 + x4 ≤ 10
xi ∈ Z⁺ for all i = 1, 2, 3, 4

Types of integer programming problems
Pure Integer Programming Problem: all variables are required to be integers.
Mixed Integer Programming Problem: some variables are restricted to be integers; the others can take any value.
Binary Integer Programming Problem: all variables are restricted to be 0 or 1.

Linear Relaxation

Objective Function: Maximize z = 8x1 + 11x2 + 6x3 + 4x4
Subject to: 5x1 + 7x2 + 4x3 + 3x4 ≤ 14
xi ∈ {0, 1} for all i = 1, 2, 3, 4

Relax it to:

Objective Function: Maximize z = 8x1 + 11x2 + 6x3 + 4x4
Subject to: 5x1 + 7x2 + 4x3 + 3x4 ≤ 14
xi ∈ [0, 1] for all i = 1, 2, 3, 4

Non-optimality

Optimum of the relaxation: x1 = x2 = 1, x3 = 0.5, x4 = 0, and z = 22.

Rounding x3 = 0: gives z = 19.

Rounding x3 = 1: Infeasible.
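With only four binary variables there are 2⁴ = 16 assignments, so the gap between rounding and true integer optimality can be verified exhaustively; a quick sketch:

```python
from itertools import product

# Maximize z = 8x1 + 11x2 + 6x3 + 4x4 over binary x, with
# 5x1 + 7x2 + 4x3 + 3x4 <= 14.
c = [8, 11, 6, 4]
a = [5, 7, 4, 3]

best_z, best_x = None, None
for x in product((0, 1), repeat=4):                  # all 16 binary assignments
    if sum(ai * xi for ai, xi in zip(a, x)) <= 14:   # feasibility check
        z = sum(ci * xi for ci, xi in zip(c, x))
        if best_z is None or z > best_z:
            best_z, best_x = z, x

print(best_z, best_x)   # -> 21 (0, 1, 1, 1)
```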

Optimal integer solution: x1 = 0, x2 = x3 = x4 = 1, and z = 21.

Converting finite valued integer variables to binary

Assume that xj can assume values in {p1, p2, ..., pk}. To convert it to binary integer programming:
Introduce variables yj^1, yj^2, ..., yj^k ∈ {0, 1}, and
substitute xj with p1 yj^1 + p2 yj^2 + ... + pk yj^k in the objective function and all the constraints, and
introduce a new constraint: yj^1 + yj^2 + ... + yj^k = 1.

Example

Objective Function: Maximize z = 8x1 + 11x2 + 6x3 + 4x4
Subject to: 5x1 + 7x2 + 4x3 + 3x4 ≤ 14
x1 ∈ {1, 2, 3} and xi ∈ {0, 1} for all i = 2, 3, 4
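The conversion described above can be checked by brute force: encode x1 = 1·y1 + 2·y2 + 3·y3 with exactly one yk set to 1, and enumerate all assignments (variable names here are mine):

```python
from itertools import product

# x1 in {1,2,3} is encoded as x1 = 1*y1 + 2*y2 + 3*y3 with y1 + y2 + y3 = 1.
vals = (1, 2, 3)

best = None
for y in product((0, 1), repeat=3):
    if sum(y) != 1:                      # the new constraint y1 + y2 + y3 = 1
        continue
    x1 = sum(p * yk for p, yk in zip(vals, y))
    for x2, x3, x4 in product((0, 1), repeat=3):
        if 5*x1 + 7*x2 + 4*x3 + 3*x4 <= 14:
            z = 8*x1 + 11*x2 + 6*x3 + 4*x4
            if best is None or z > best[0]:
                best = (z, x1, x2, x3, x4)

print(best)   # -> (22, 2, 0, 1, 0)
```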

Converted Problem: substituting x1 = 1·y1^1 + 2·y1^2 + 3·y1^3
Objective Function: Max z = 8y1^1 + 16y1^2 + 24y1^3 + 11x2 + 6x3 + 4x4
Subject to: 5y1^1 + 10y1^2 + 15y1^3 + 7x2 + 4x3 + 3x4 ≤ 14
y1^1 + y1^2 + y1^3 = 1
y1^1, y1^2, y1^3 ∈ {0, 1} and xi ∈ {0, 1} for all i = 2, 3, 4
Solution: y1^1 = 0, y1^2 = 1, y1^3 = 0, x2 = 0, x3 = 1, x4 = 0, and z = 22.
Implying: x1 = 2, x2 = 0, x3 = 1, x4 = 0, and z = 22.

Applications of ILP

Vertex Cover Problem: Given a graph G = (V, E), find a subset of vertices U ⊆ V such that each edge in E is incident to at least one vertex in U.

ILP Formulation:

Variables: x1, x2, ..., xn corresponding to each vertex in V.
Objective function: Min Σ_{i=1}^{n} xi.
Constraints: for each edge eij = (vi, vj) ∈ E, xi + xj ≥ 1.
xi ∈ {0, 1} for all i = 1, 2, ..., n.
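The formulation can be exercised on a tiny assumed graph (a path on four vertices, not an example from the text) by testing every 0-1 vector against the edge constraints:

```python
from itertools import product

# Assumed example: path graph on 4 vertices with edges (0,1), (1,2), (2,3).
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

best = None
for x in product((0, 1), repeat=n):
    if all(x[i] + x[j] >= 1 for i, j in edges):   # every edge covered
        if best is None or sum(x) < sum(best):
            best = x

print(sum(best), best)   # minimum vertex cover has size 2
```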

Set Cover Problem: Given a set of elements U = {1, 2, ..., M} and a collection S of subsets {S1, S2, ..., Sk} of U, choose the minimum number of subsets such that their union is U.

Formulation:

Variables: x1, x2, ..., xk corresponding to S1, S2, ..., Sk.
Objective function: Min Σ_{i=1}^{k} xi.
Constraints: for each element uj ∈ U, Σ_{i : uj ∈ Si} xi ≥ 1.
xi ∈ {0, 1} for all i = 1, 2, ..., k.
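The same exhaustive approach works for the set cover formulation; the instance below is an assumed example:

```python
from itertools import combinations

# Assumed instance: U = {1..5} with four candidate subsets.
U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]

best = None
for r in range(1, len(S) + 1):            # try covers of increasing size
    for picked in combinations(range(len(S)), r):
        if set().union(*(S[i] for i in picked)) == U:
            best = picked
            break
    if best is not None:
        break

print(len(best), best)   # -> 2 (0, 3): S1 and S4 together cover U
```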

Maximum Clique Problem: A clique in a graph G = (V, E) is a subset C ⊆ V such that the subgraph of G induced by those vertices is a complete graph. The objective is to choose the largest clique in G.

Formulation:

Variables: x1, x2, ..., xn corresponding to every member of V.
Objective function: Max Σ_{i=1}^{n} xi.
Constraints: xi + xj ≤ 1 for all (vi, vj) ∉ E and i < j.
xi ∈ {0, 1} for all i = 1, 2, ..., n.

Maximum Independent Set Problem: An independent set in a graph G = (V, E) is a subset C ⊆ V such that there is no edge between any pair of vertices in C. The objective is to choose the largest independent set in G.

Formulation:

Variables: x1, x2, ..., xn corresponding to every member of V.
Objective function: Max Σ_{i=1}^{n} xi.
Constraints: xi + xj ≤ 1 for all (vi, vj) ∈ E and i < j.
xi ∈ {0, 1} for all i = 1, 2, ..., n.
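The clique and independent set formulations differ only in whether the pairwise constraint runs over non-edges or edges. A brute-force check of the independent set ILP on the 5-cycle (an assumed example):

```python
from itertools import product

# Assumed example: the 5-cycle C5.  Its maximum independent set has size 2.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]

best = 0
for x in product((0, 1), repeat=n):
    # ILP constraint: x_i + x_j <= 1 for every edge (v_i, v_j)
    if all(x[i] + x[j] <= 1 for i, j in edges):
        best = max(best, sum(x))

print(best)   # -> 2
```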

Computational Status of ILP - NP-Hard

Proof: ILP ∈ NP
Consider a 0-1 ILP, where each of the variables x1, x2, ..., xn can assume value 0 or 1, and the number of constraints is m. Example:

Objective Function: Maximize x1 − x2 + x3 + x4
Subject to: x1 + x2 − x3 + x4 ≥ 0
x1 + x3 − x4 ≥ 0
xi ∈ {0, 1} for all i = 1, 2, 3, 4

We can choose all possible 2^n assignments of x1, x2, ..., xn in a non-deterministic manner. Checking the feasibility of each assignment takes O(nm) time, and computing the value of the objective function for each feasible assignment takes O(n) time. Thus, we have a non-deterministic polynomial-time algorithm for the ILP.

ILP is NP-hard (Reduction from 3-SAT)

Given a 3-SAT expression with variables x1, x2, ..., xn.
Example: (x1 ∨ x̄2 ∨ x̄3) ∧ (x2 ∨ x̄4 ∨ x5)
Create an ILP instance with variables z1, z2, ..., zn, where each clause is converted into an inequality as follows:
z1 + (1 − z2) + (1 − z3) > 0
z2 + (1 − z4) + z5 > 0
zi ∈ {0, 1} for all i = 1, 2, ..., 5
Now, a feasible solution of this ILP indicates a satisfying truth assignment of the 3-SAT expression.
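The reduction is mechanical: a positive literal xk contributes zk, a negated literal contributes (1 − zk), and each clause requires a positive sum. A sketch that checks, for the example expression, that an assignment is ILP-feasible exactly when it satisfies the formula:

```python
from itertools import product

# Clauses as literal lists: +k means xk, -k means NOT xk.
# Example from the text: (x1 v ~x2 v ~x3) ^ (x2 v ~x4 v x5)
clauses = [[1, -2, -3], [2, -4, 5]]
n = 5

def lhs(lit, z):
    # ILP term for a literal: zk for xk, (1 - zk) for NOT xk.
    k = abs(lit)
    return z[k - 1] if lit > 0 else 1 - z[k - 1]

for z in product((0, 1), repeat=n):
    ilp_ok = all(sum(lhs(l, z) for l in c) > 0 for c in clauses)
    sat_ok = all(any(bool(z[abs(l) - 1]) == (l > 0) for l in c) for c in clauses)
    assert ilp_ok == sat_ok    # feasibility <=> satisfiability, per assignment

print("reduction verified on all", 2 ** n, "assignments")
```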

Conclusion: ILP is NP-hard since 3-SAT is NP-complete.

An important property of ILP

Important Result
An m × n matrix is said to be unimodular if for every submatrix of size m × m, the value of its determinant is +1, 0, or −1.
A matrix is said to be totally unimodular (TUM) if all of its possible square submatrices are unimodular.
If the coefficient matrix of an integer linear program is TUM, then the program is polynomially solvable.

Polynomial Solvability of Unimodular ILP

Result: If the constraint matrix A in an LP is totally unimodular and b is integer valued, then every vertex of the feasible polyhedron is integer valued.

Proof

Let Am×n be totally unimodular, with m < n.
Let Bm×m be a basis, and xB = B⁻¹b be a basic solution.
By Cramer's rule, B⁻¹ = Cᵀ / det(B), where C = ((cij)) is the cofactor matrix of B:

cij = (−1)^{i+j} det(Mij),

where Mij is the (m−1) × (m−1) submatrix obtained from B by deleting row i and column j.

Thus, each element of the matrix B⁻¹ is an integer: every cofactor is the determinant of a square submatrix of A, hence 0 or ±1, and det(B) = ±1. Now, if the elements of the requirement vector b are integers, the basic feasible solution corresponding to the basis B is integer valued. The same is true for all possible bases of the matrix A.

Conclusion: Thus, if the coefficient matrix of an ILP is TUM, then it can be solved in polynomial time.

Total unimodularity and bipartite graphs

Result: A graph is bipartite if and only if its incidence matrix is totally unimodular.
Proof:
If part: Let A be TUM. Assume that G is not bipartite. Then G contains an odd cycle. The submatrix of A which corresponds to the odd cycle has determinant ±2. This contradicts the total unimodularity of the matrix A.
Only if part: Let G be bipartite. Consider a t × t submatrix of A. The proof is by induction on t. (t = 1 follows from the definition of the incidence matrix.)
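For small matrices, total unimodularity can be verified directly from the definition by enumerating every square submatrix. A sketch (the helpers det and totally_unimodular are mine), applied to the incidence matrices of a 4-cycle, which is bipartite, and a triangle, which is an odd cycle:

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row; fine for tiny submatrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def totally_unimodular(A):
    m, n = len(A), len(A[0])
    for t in range(1, min(m, n) + 1):
        for rows in combinations(range(m), t):
            for cols in combinations(range(n), t):
                B = [[A[i][j] for j in cols] for i in rows]
                if det(B) not in (-1, 0, 1):
                    return False
    return True

# Vertex-edge incidence matrices (rows: vertices, columns: edges).
C4 = [[1, 0, 0, 1],   # 4-cycle: bipartite
      [1, 1, 0, 0],
      [0, 1, 1, 0],
      [0, 0, 1, 1]]
K3 = [[1, 0, 1],      # triangle: odd cycle, full determinant is 2
      [1, 1, 0],
      [0, 1, 1]]

print(totally_unimodular(C4), totally_unimodular(K3))   # -> True False
```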

König Matching Theorem: Let G be a bipartite graph. Then the cardinality of a maximum matching equals the cardinality of a minimum vertex cover (ν(G) = τ(G)).

Proof: We know that the matching number is given by ν(G) = max{1ᵀx | Ax ≤ 1, x ≥ 0}, where A is the vertex–edge incidence matrix, and x corresponds to the edges in G.

Its dual problem: d ∗ = min{1T y|y ≥ 0, AT y ≥ 1}, where y corresponds to the vertices in G.

By LP duality, d* = ν(G). Let y be the incidence vector of a minimum vertex cover. Then y is feasible in the dual problem, with objective value τ(G). Suppose y is a non-optimal solution of the dual problem. Then there is a (0-1) optimum solution y* with d* < τ(G). But this (0-1) solution y* is also a feasible solution of the vertex cover problem, indicating that y is not optimum for the vertex cover problem ⟹ contradiction.

Conclusion: The vertex cover problem is polynomially solvable for bipartite graphs.

Incidence matrix of a directed graph

Let G = (V , E) be a directed graph

Define a |V| × |E| incidence matrix A = ((av,e)) as follows:

av,e = +1 if e leaves v
     = −1 if e enters v        (1)
     =  0 otherwise
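Definition (1) translates directly into code; a sketch on a small assumed digraph, confirming that every column has exactly one +1 and one −1:

```python
# Assumed digraph on 4 vertices; edges are (tail, head) pairs.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# A[v][e] = +1 if e leaves v, -1 if e enters v, 0 otherwise.
A = [[0] * len(E) for _ in V]
for e, (u, v) in enumerate(E):
    A[u][e] = 1     # e leaves u
    A[v][e] = -1    # e enters v

for col in zip(*A):
    assert col.count(1) == 1 and col.count(-1) == 1
    assert sum(col) == 0    # hence the rows of A sum to the zero vector

print(A)
```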

Note: Every column of A has exactly one +1 and one −1; the other elements are all zeroes.

Total unimodularity and directed graphs

Result The incidence matrix A of a directed graph G is totally unimodular.

Proof: Let B be a t × t square submatrix of A. Proof by induction on t.
Case 0: t = 1. This case is trivial.
Case 1: B has a zero column ⟹ det(B) = 0.
Case 2: B has a column with exactly one +1 (or −1). Expand det(B) along this column and use the induction hypothesis: det(B) = +1, −1, or 0.
Case 3: Every column of B has one +1 and one −1. The row vectors of B add up to the zero vector ⟹ det(B) = 0.

Transportation Problem

The Problem Given

a set of m sources s1, s2, ..., sm, and a set of n destinations d1, d2, ..., dn,

the costs cij of transporting from source si to destination dj ,

the supply ai at source si for all i = 1, 2,..., m,

the requirement bj at destination dj for all j = 1, 2, ..., n. The objective is to solve the LP

Minimize z = Σ_{i=1}^{m} Σ_{j=1}^{n} xij cij

Subject to:
Σ_{j=1}^{n} xij ≤ ai  for all i = 1, 2, ..., m
Σ_{i=1}^{m} xij ≥ bj  for all j = 1, 2, ..., n
xij ≥ 0  for all i = 1(1)m, j = 1(1)n

Balanced Transportation
Here, Σ_{i=1}^{m} ai = Σ_{j=1}^{n} bj, and the constraints are equalities.

Example of a balanced transportation problem. Consider the following problem with 2 factories and 3 warehouses:

           warehouse 1   warehouse 2   warehouse 3   supply
Factory 1  c11 = 2       c12 = 5       c13 = 7       20
Factory 2  c21 = 6       c22 = 4       c23 = 4       10
Demand     7             10            13
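Because the instance is balanced with small integer supplies, its optimum can be found by exhaustively enumerating Factory 1's integer shipments (Factory 2 must then ship exactly the remainder); a sketch:

```python
cost = [[2, 5, 7],
        [6, 4, 4]]
supply = [20, 10]
demand = [7, 10, 13]

best = None
# Enumerate Factory 1's shipments (a, b, c); Factory 2 covers the rest.
for a in range(min(supply[0], demand[0]) + 1):
    for b in range(min(supply[0], demand[1]) + 1):
        c = supply[0] - a - b
        if not 0 <= c <= demand[2]:
            continue
        row2 = [demand[0] - a, demand[1] - b, demand[2] - c]
        if sum(row2) != supply[1]:      # balanced, so this always holds
            continue
        z = (cost[0][0]*a + cost[0][1]*b + cost[0][2]*c
             + cost[1][0]*row2[0] + cost[1][1]*row2[1] + cost[1][2]*row2[2])
        if best is None or z < best[0]:
            best = (z, [a, b, c], row2)

print(best)   # -> (125, [7, 10, 3], [0, 0, 10])
```

Consistent with the total unimodularity results discussed above, the LP has an integral optimum, so this integer search finds it.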

This is an LP, and can be solved in polynomial time.

Assignment Problem

Assignment Problem

Here, the cost of doing job j on machine i is cij ≥ 0. If cij = ∞, then machine i cannot be used for job j. m = n

ai = 1 for all i = 1, 2,..., m, and

bj = 1 for all j = 1, 2,..., n.

xij = 0 or 1

This is an ILP, where the constraints can be written as

Σ_{j=1}^{n} xij = 1  for all i = 1, 2, ..., n
−Σ_{i=1}^{m} xij = −1  for all j = 1, 2, ..., n
xij = 0 or 1  for all i = 1(1)n, j = 1(1)n

and the objective function remains the same as in the transportation problem:

Minimize z = Σ_{i=1}^{m} Σ_{j=1}^{n} xij cij

Min-Cost Bipartite Matching

[Figure: a complete bipartite graph with sources s1, s2, ..., sn on one side and destinations d1, d2, ..., dn on the other.]

Since each column of the coefficient matrix has one +1 and one −1, the matrix is totally unimodular. Thus, the assignment problem can be solved in polynomial time.

Problems on Interval Graph

An interval matrix has 0-1 entries, and each row is of the form (0, ..., 0, 1, ..., 1, 0, ..., 0).

Result Each interval matrix A is totally unimodular.

Proof: Let B be a t × t submatrix of A, and let

N = [ 1 −1  0 ...  0  0 ]
    [ 0  1 −1 ...  0  0 ]
    [ ...               ]
    [ 0  0  0 ...  1 −1 ]
    [ 0  0  0 ...  0  1 ]

Note that det(N) = 1. Now, NBᵀ is a submatrix of the incidence matrix of some directed graph. Thus, NBᵀ is totally unimodular, and hence det(B) ∈ {0, +1, −1}.

Clique of Interval Graph


[Figure: intervals A, B, C, D, E, F, G on a line and the corresponding interval graph, together with the coefficient matrix of the clique formulation; the 1s in each row appear consecutively.]

Observation: As the consecutive-1s property is satisfied by the coefficient matrix, the clique problem for interval graphs can be solved in polynomial time.

Clique Cover Problem for Interval Graph

[Figure: intervals I1, ..., I12 on a line, with candidate stabbing positions x1, ..., x11.]

The corresponding matrix (rows: positions x1, ..., x11; columns: intervals I1, ..., I12):

      I1 I2 I3 I4 I5 I6 I7 I8 I9 I10 I11 I12
x1     1  1  1  0  0  0  0  0  0  0   0   0
x2     0  1  1  0  0  0  0  0  0  0   0   0
x3     0  0  1  1  1  0  0  0  0  0   0   0
x4     0  0  0  1  1  1  0  0  0  0   0   0
x5     0  0  0  0  1  1  1  1  0  0   0   0
x6     0  0  0  0  0  1  1  1  1  0   0   0
x7     0  0  0  0  0  0  1  1  1  0   0   0
x8     0  0  0  0  0  0  1  1  0  1   0   0
x9     0  0  0  0  0  0  1  0  0  1   1   0
x10    0  0  0  0  0  0  0  0  0  0   1   1
x11    0  0  0  0  0  0  0  0  0  0   0   1

Minimum Clique Cover
Minimize Σ_{i=1}^{n} xi subject to Σ_{i : xi ∈ Ij} xi ≥ 1 for all intervals Ij.

Time complexity
Thus, the minimum clique cover corresponds to the minimum number of vertical lines that stab all the segments.
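The stabbing view also yields a direct greedy algorithm: sweep intervals by right endpoint and place a line at the right endpoint of the first interval not yet stabbed. A sketch on an assumed set of intervals (not the twelve in the figure):

```python
def min_stabbing_points(intervals):
    # Greedy: process intervals by increasing right endpoint; whenever the
    # current interval is not stabbed yet, stab it at its right endpoint.
    points = []
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if not points or points[-1] < lo:
            points.append(hi)
    return points

# Assumed example intervals (lo, hi).
intervals = [(0, 3), (2, 5), (4, 8), (6, 9), (10, 12)]
print(min_stabbing_points(intervals))   # -> [3, 8, 12]
```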

As the coefficient matrix satisfies the consecutive-1s property, the problem can be solved in polynomial time.

Solving ILP - Cutting plane method

ILP Problem                      Corresponding LP Problem
Minimize cᵀx                     Minimize cᵀx
Subject to Ax = b                Subject to Ax = b
x ≥ 0, x integer.                x ≥ 0.

x* is a basic feasible solution of the LP. The greedy strategy is to choose the integer point nearest to x* as the solution of the ILP. But such a solution may not be feasible.

[Figure: the feasible set, the direction of decreasing cost, the LP optimum x*, and the ILP solution x0.]

Theme of the Algorithm
Step 1: Solve the LP relaxation of the ILP problem.
Step 2: If the solution is not integer, then add a new constraint, called a cutting plane, that cuts off the current fractional solution but does not exclude any feasible integer solution of the ILP problem.
Step 3: Go to Step 1.

Getting a Cutting Plane - Gomory's technique

Recall the simplex tableau:

                cj:  c1   c2   ...  cj   ...  cn
xB    cB   y0   y1   y2   ...  yj   ...  yn
xB1   cB1  y10  y11  y12  ...  y1j  ...  y1n
xB2   cB2  y20  y21  y22  ...  y2j  ...  y2n
...
xBi   cBi  yi0  yi1  yi2  ...  yij  ...  yin
...
xBm   cBm  ym0  ym1  ym2  ...  ymj  ...  ymn

Consider a typical equation (for some i ∈ {1, ..., m}) of the tableau:

xB(i) + Σ_{j ∉ B} yij xj = yi0        (*)

Consider a typical equation (for some i ∈ {1, ..., m}) of the tableau:

xB(i) + Σ_{j ∉ B} yij xj = yi0        (*)

Since each xj ≥ 0 and ⌊yij⌋ ≤ yij, we also have

Σ_{j ∉ B} ⌊yij⌋ xj ≤ Σ_{j ∉ B} yij xj

Thus, we have

xB(i) + Σ_{j ∉ B} ⌊yij⌋ xj ≤ yi0

Since the LHS is an integer for any integer solution, we can replace the constraint by

xB(i) + Σ_{j ∉ B} ⌊yij⌋ xj ≤ ⌊yi0⌋        (**)

Subtracting (**) from (*), we have

Σ_{j ∉ B} (yij − ⌊yij⌋) xj ≥ yi0 − ⌊yi0⌋

We have

Σ_{j ∉ B} (yij − ⌊yij⌋) xj ≥ yi0 − ⌊yi0⌋

Taking fij = yij − ⌊yij⌋ for all i = 0, 1, ..., m, we have

Σ_{j ∉ B} fij xj ≥ fi0

Adding a slack variable s:

−Σ_{j ∉ B} fij xj + s = −fi0

Example

ILP Problem
Maximize x2
Subject to: 3x1 + 2x2 ≤ 6
−3x1 + 2x2 ≤ 0
x1, x2 ≥ 0, x integer.

[Figure: the feasible region and the first LP optimum at x = (1, 3/2).]

Initial Tableau:

            cj:   0    1    0    0
xb   cb   y0    y1   y2   y3   y4
x3   0    6     3    2    1    0
x4   0    0    −3    2    0    1
zj − cj   0     0   −1    0    0

Final Tableau:

            cj:   0    1    0     0
xb   cb   y0    y1   y2   y3    y4
x1   0    1     1    0    1/6  −1/6
x2   1    3/2   0    1    1/4   1/4
zj − cj   3/2   0    0    1/4   1/4

Solution of the LP: x = (1, 3/2)
Cost: x2 = 3/2

The second row of the tableau says that

x2 + (1/4)x3 + (1/4)x4 = 3/2

Substituting the maximum integral value (i.e., x2 = 1), we have

(1/4)x3 + (1/4)x4 ≥ 1/2
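Computing a Gomory cut is just taking fractional parts, fij = yij − ⌊yij⌋, of a tableau row with fractional right-hand side. A sketch that reproduces the cut above from the x2-row of the final tableau (the function name is mine):

```python
import math

def gomory_cut(coeffs, rhs):
    """Fractional parts of a tableau row give the cut: sum f_j x_j >= f_0."""
    f = {j: y - math.floor(y) for j, y in coeffs.items()}
    f0 = rhs - math.floor(rhs)
    return f, f0

# x2-row of the final tableau: x2 + (1/4)x3 + (1/4)x4 = 3/2
coeffs = {'x3': 0.25, 'x4': 0.25}
f, f0 = gomory_cut(coeffs, 1.5)
print(f, f0)   # -> {'x3': 0.25, 'x4': 0.25} 0.5, i.e. (1/4)x3 + (1/4)x4 >= 1/2
```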

This implies a new constraint: x2 ≤ 1.

[Figure: the feasible region with the first cut x2 ≤ 1; the second LP optimum lies on the cut.]

Another Iteration

Initial Tableau:

            cj:   0    1    0     0     0
xb   cb   y0     y1   y2   y3    y4    s1
x1   0    1      1    0    1/6  −1/6   0
x2   1    3/2    0    1    1/4   1/4   0
s1   0   −1/2    0    0   −1/4  −1/4   1
zj − cj   3/2    0    0    1/4   1/4   0

Final Tableau:

            cj:   0    1    0    0     0
xb   cb   y0     y1   y2   y3   y4    s1
x1   0    2/3    1    0    0   −1/3   2/3
x2   1    1      0    1    0    0     1
x3   0    2      0    0    1    1    −4
zj − cj   1      0    0    0    0     1

The obtained solution x = (2/3, 1) is still not an integer solution.
The first row of the tableau says that x1 − (1/3)x4 + (2/3)xs1 = 2/3,
implying x1 − x4 ≤ 2/3. Since x1 and x4 are both integers, we have x1 − x4 ≤ 0.
In other words (the Gomory cut for this row): (2/3)x4 + (2/3)xs1 ≥ 2/3.
Combining with Eq. (2), we have another constraint: x1 ≥ x2.

Explanation of last step

We have the following:
1. (2/3)x4 + (2/3)xs1 ≥ 2/3 ⟹ x4 + xs1 ≥ 1.
2. (1/4)x3 + (1/4)x4 ≥ 1/2, with slack: (1/4)x3 + (1/4)x4 − xs1 = 1/2.
3. 3x1 + 2x2 + x3 = 6
4. −3x1 + 2x2 + x4 = 0

Substituting xs1 from (2) into (1), we have x4 + ((1/4)x3 + (1/4)x4 − 1/2) ≥ 1,
implying 5x4 + x3 ≥ 6.
Putting x3 and x4 from (3) and (4) respectively, we have x1 ≥ x2.

[Figure: the feasible region with the first cut (x2 ≤ 1) and the second cut (x1 ≥ x2); the first, second, and third LP optima are marked.]

Finally, we have the solution: x1 = 1 and x2 = 1. The number of iterations is finite, but may be exponential in the number of variables and constraints.

Questions !!!

A further result

Result (Eisenbrand and Rote, 2001): A two-variable integer programming problem defined by m linear constraints, where each coefficient is represented using s bits, can be solved in O(m + s log m) arithmetic operations, or O((m + log s log m) M(s)) bit operations, where M(s) is the time needed for multiplying two s-bit integers.