
Module – 3 Lecture Notes – 5

Revised Simplex Method and Sensitivity Analysis

Introduction

In the previous class, the simplex method was discussed, in which the entire simplex tableau has to be computed at each iteration. The revised simplex method is an improvement over the simplex method: it is computationally more efficient and accurate.

Duality of an LP problem is a useful property that makes the problem easier to handle in some cases and leads to the dual simplex method. It is also helpful in sensitivity or post-optimality analysis of decision variables.

In this lecture, revised simplex method, duality of LP, dual simplex method and sensitivity or post optimality analysis will be discussed.

Revised Simplex method

The benefit of the revised simplex method is most clearly seen for large LP problems. In the simplex method, the entire tableau is updated at every iteration, even though only a small part of it is actually used. The revised simplex method uses exactly the same steps as the simplex method; the only difference lies in the details of computing the entering and departing variables, as explained below.

Let us consider the following LP problem, written in general notation after transforming it to its standard form and incorporating all required slack, surplus and artificial variables:

(Z)    Z = c1x1 + c2x2 + c3x3 + ... + cnxn
(xi)   c11x1 + c12x2 + c13x3 + ... + c1nxn = b1
(xj)   c21x1 + c22x2 + c23x3 + ... + c2nxn = b2
        ...
(xl)   cm1x1 + cm2x2 + cm3x3 + ... + cmnxn = bm

As the revised simplex method is mostly beneficial for large LP problems, it will be discussed in the context of matrix notation. The matrix notation of the above LP problem can be expressed as follows:


Minimize  Z = CᵀX
subject to:  AX = B
with:  X ≥ 0

where

X = [x1, x2, ..., xn]ᵀ,   C = [c1, c2, ..., cn]ᵀ,   B = [b1, b2, ..., bm]ᵀ,   0 = [0, 0, ..., 0]ᵀ,

      ⎡ c11  c12  ...  c1n ⎤
A =   ⎢ c21  c22  ...  c2n ⎥
      ⎢  .    .   ...   .  ⎥
      ⎣ cm1  cm2  ...  cmn ⎦

It can be noted, for the subsequent discussion, that the column vector corresponding to a decision variable xk is [c1k, c2k, ..., cmk]ᵀ.

Let XS be the column vector of basic variables, CS the row vector of cost coefficients corresponding to XS, and S the basis matrix corresponding to XS.

1. Selection of entering variable

For each of the nonbasic variables, calculate the coefficient (WP − c), where P is the column vector associated with the nonbasic variable at hand, c is the cost coefficient associated with that nonbasic variable, and W = CS S⁻¹. For a maximization (minimization) problem, the nonbasic variable having the lowest negative (highest positive) coefficient, as calculated above, is the entering variable.

2. Selection of departing variable

a. A new column vector U is calculated as U = S⁻¹B.
b. Corresponding to the entering variable, another vector V is calculated as V = S⁻¹P, where P is the column vector corresponding to the entering variable.
c. It may be noted that the length of both U and V is the same (= m). For i = 1, ..., m, the ratios U(i)/V(i) are calculated, provided V(i) > 0. The index i = r for which the ratio is least is noted; the r-th basic variable of the current basis is the departing variable. If V(i) ≤ 0 for all i, further calculation is stopped, concluding that a bounded solution does not exist for the LP problem at hand.


3. Update to new basis

The old basis S is updated to the new basis Snew as Snew⁻¹ = E S⁻¹, where E is the m × m identity matrix with its r-th column replaced by the vector η = [η1, η2, ..., ηm]ᵀ:

      ⎡ 1  0  ...  η1   ...  0 ⎤
      ⎢ 0  1  ...  η2   ...  0 ⎥
E =   ⎢ .  .  ...   .   ...  . ⎥
      ⎢ 0  0  ...  ηr   ...  0 ⎥
      ⎢ .  .  ...   .   ...  . ⎥
      ⎣ 0  0  ...  ηm   ...  1 ⎦
                   ↑
              r-th column

with
ηi = −V(i)/V(r)   for i ≠ r
ηr = 1/V(r)       for i = r

(The minus sign for i ≠ r makes E S⁻¹ the inverse of the matrix obtained from S by replacing its r-th column with P.)

S is replaced by Snew and steps 1 through 3 are repeated. If all the coefficients calculated in step 1, i.e., (WP − c), are nonnegative (nonpositive) in the case of a maximization (minimization) problem, then the optimum solution has been reached, and the optimal solution is

XS = S⁻¹B   and   Z = CS XS
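The computations in steps 1 to 3 amount to a few matrix-vector products. The following sketch is a minimal NumPy illustration of one iteration for a maximization problem in the standard form Ax = b, x ≥ 0; the function name, the data layout and the absence of numerical tolerances are simplifications of our own, not part of the notes.

```python
import numpy as np

def revised_simplex_step(A, b, c, basis, S_inv):
    """One iteration of the revised simplex method (maximization, Ax = b, x >= 0).

    A, b, c : NumPy arrays defining the problem
    basis   : list of m column indices of the current basic variables
    S_inv   : current basis inverse S^-1
    Returns (basis, S_inv, done)."""
    m, n = A.shape
    W = c[basis] @ S_inv                           # step 1: W = C_S S^-1

    # step 1: coefficients (W P - c) of the nonbasic variables
    nonbasic = [k for k in range(n) if k not in basis]
    coeff = {k: W @ A[:, k] - c[k] for k in nonbasic}
    entering = min(coeff, key=coeff.get)           # lowest (most negative) for maximization
    if coeff[entering] >= 0:
        return basis, S_inv, True                  # no negative coefficient left: optimum reached

    # step 2: U = S^-1 B, V = S^-1 P and the ratio test
    U = S_inv @ b
    V = S_inv @ A[:, entering]
    candidates = [(U[i] / V[i], i) for i in range(m) if V[i] > 0]
    if not candidates:
        raise ValueError("no bounded solution exists")
    _, r = min(candidates)                         # r-th basic variable departs

    # step 3: product-form update of the basis inverse, S_new^-1 = E S^-1
    eta = -V / V[r]
    eta[r] = 1.0 / V[r]
    E = np.eye(m)
    E[:, r] = eta
    return basis[:r] + [entering] + basis[r + 1:], E @ S_inv, False
```

Starting from an initial basis of slack/artificial variables (for which S_inv is the identity matrix), the step is repeated until it reports done, after which XS = S⁻¹B and Z = CS XS as stated above.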

Duality of LP problems

Each LP problem (called the Primal in this context) is associated with a counterpart known as the Dual LP problem. Instead of the primal, solving the dual LP problem is sometimes easier when a) the dual has fewer constraints than the primal (the time required for solving LP problems is directly affected by the number of constraints, i.e., the number of iterations necessary to converge to an optimum solution, which in the simplex method usually ranges from 1.5 to 3 times the number of structural constraints in the problem), and b) the dual involves maximization of an objective function (it may be possible to avoid the artificial variables that would otherwise be used in a primal minimization problem). The dual LP problem can be constructed by defining a new decision variable for each constraint in the primal problem and a new constraint for each variable in the primal. The coefficient of the i-th variable in the dual's objective function is the i-th component of the primal's requirements vector (the right-hand-side values of the constraints in the primal). The dual's requirements vector consists of the coefficients of the decision variables in the primal objective function.

The coefficients of each constraint in the dual (i.e., its row vectors) are the column vectors associated with each decision variable in the coefficient matrix of the primal problem. In other words, the coefficient matrix of the dual is the transpose of the primal's coefficient matrix. Finally, maximizing the primal problem is equivalent to minimizing the dual, and their respective optimal values will be exactly equal. When a primal constraint is a less-than-or-equal-to inequality, the corresponding variable in the dual is non-negative. An equality constraint in the primal problem means that the corresponding dual variable is unrestricted in sign. Obviously, the dual of the dual is the primal. In summary, the following relationships exist between the primal and the dual.

Primal                                         Dual
---------------------------------------------  ---------------------------------------------
Maximization                                   Minimization
Minimization                                   Maximization
i-th variable                                  i-th constraint
j-th constraint                                j-th variable
xi ≥ 0                                         Inequality sign of the i-th constraint:
                                                 ≤ if the dual is a maximization
                                                 ≥ if the dual is a minimization
i-th variable unrestricted in sign             i-th constraint with = sign
j-th constraint with = sign                    j-th variable unrestricted in sign
Cost coefficient associated with the j-th      RHS of the j-th constraint
  variable in the objective function
RHS of the i-th constraint                     Cost coefficient associated with the i-th
                                                 variable in the objective function

The construction is summarized schematically below for better understanding and quick reference.


In the dual, the direction of optimization is reversed: the primal below is a maximization, so its dual is a minimization. The cost coefficients c1, ..., cn of the primal objective function become the right-hand sides of the dual constraints, and each primal constraint is marked with its corresponding dual decision variable y1, ..., ym:

Maximize Z = c1x1 + c2x2 + ... + cnxn

Subject to
c11x1 + c12x2 + ... + c1nxn = b1     → y1
c21x1 + c22x2 + ... + c2nxn ≤ b2     → y2
   ...
cm1x1 + cm2x2 + ... + cmnxn ≤ bm     → ym

x1 ≥ 0, x2 unrestricted, ..., xn ≥ 0

The right-hand sides b1, b2, ..., bm of the primal constraints become the cost coefficients of the dual objective function: Minimize Z' = b1y1 + b2y2 + ... + bmym.

The j-th dual constraint is formed from the coefficients of xj (the j-th column of the primal coefficient matrix), its right-hand side is the primal cost coefficient cj, and its sign is fixed by the restriction on xj. Thus the 1st dual constraint is c11y1 + c21y2 + ... + cm1ym ≥ c1 (since x1 ≥ 0 and the dual is a minimization), the 2nd dual constraint is c12y1 + c22y2 + ... + cm2ym = c2 (since x2 is unrestricted), and so on. Finally, the sign of each dual variable yi is determined by the i-th primal constraint: the equality constraint makes y1 unrestricted, while the ≤ constraints make y2, ..., ym ≥ 0.

Dual Problem

Minimize Z' = b1y1 + b2y2 + ... + bmym

Subject to
c11y1 + c21y2 + ... + cm1ym ≥ c1
c12y1 + c22y2 + ... + cm2ym = c2
   ...
c1ny1 + c2ny2 + ... + cmnym ≥ cn

y1 unrestricted, y2 ≥ 0, ..., ym ≥ 0


It may be noted that, before finding its dual, all the constraints should be transformed to 'less-than-or-equal-to' or 'equal-to' type for a maximization problem, and to 'greater-than-or-equal-to' or 'equal-to' type for a minimization problem. This can be done by multiplying both sides of a constraint by −1, which reverses the inequality sign.
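For the basic case of a maximization primal with all constraints of ≤ type and all variables nonnegative, constructing the dual is a purely mechanical transposition: the requirements vector and the cost vector exchange roles and the coefficient matrix is transposed. The sketch below illustrates only this case; the function name and data layout are our own, and unrestricted variables or equality constraints would additionally need the sign rules tabulated above.

```python
import numpy as np

def construct_dual(c, A, b):
    """Dual of:  maximize c'x  subject to  Ax <= b, x >= 0.

    Returns (b, A_T, c), the data of:  minimize b'y  subject to  A'y >= c, y >= 0."""
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    return b.copy(), A.T.copy(), c.copy()
```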

Construction of the dual is illustrated with the following example.

Primal                                        Dual

Maximize Z = 4x1 + 3x2                        Minimize Z' = 6000y1 − 2000y2 + 4000y3

Subject to                                    Subject to
2x1 + 3x2 ≤ 6000                              2y1 − y2 + y3 = 4
x1 − x2 ≥ 2000                                3y1 + y2 ≥ 3
x1 ≤ 4000                                     y1 ≥ 0
x1 unrestricted                               y2 ≥ 0
x2 ≥ 0                                        y3 ≥ 0

It may be noted that the second constraint in the primal is transformed to −x1 + x2 ≤ −2000 before constructing the dual.

Primal-Dual relationships

The following points regarding the primal-dual relationship are important to note:

1. If one problem (either primal or dual) has an optimal feasible solution, the other problem also has an optimal feasible solution. The optimal objective function value is the same for both primal and dual.

2. If one problem has no solution (infeasible), the other problem is either infeasible or unbounded.

3. If one problem is unbounded the other problem is infeasible.


Dual Simplex Method

Computationally, the dual simplex method is the same as the simplex method. However, their approaches differ: the simplex method starts with a nonoptimal but feasible solution, whereas the dual simplex method starts with an optimal but infeasible solution. The simplex method maintains feasibility during successive iterations, whereas the dual simplex method maintains optimality. The steps involved in the dual simplex method are:

1. All the constraints (except those with equality (=) sign) are modified to 'less-than-or-equal-to' (≤) form. Constraints with 'greater-than-or-equal-to' (≥) sign are multiplied by −1 throughout so that the inequality sign gets reversed. Finally, all these constraints are transformed to equality (=) form by introducing the required slack variables.

2. The modified problem, as in step 1, is expressed in the form of a simplex tableau. If all the cost coefficients are positive (i.e., the optimality condition is satisfied) and one or more basic variables have negative values (i.e., the solution is not feasible), then the dual simplex method is applicable.

3. Selection of exiting variable: The basic variable with the most negative value is the exiting variable. If there are two or more candidates, any one of them is selected. The row of the selected exiting variable is marked as the pivotal row.

4. Selection of entering variable: The cost coefficients corresponding to all the negative elements of the pivotal row are identified, and their ratios are calculated after changing the sign of the elements of the pivotal row, i.e.,

   ratio = cost coefficient / (−1 × element of pivotal row).

The column corresponding to the minimum ratio is identified as the pivotal column, and the associated decision variable is the entering variable.

5. Pivotal operation: The pivotal operation is exactly the same as in the simplex method, with the pivotal element being the element at the intersection of the pivotal row and the pivotal column.

6. Check for optimality: If all the basic variables have nonnegative values then the optimum solution is reached. Otherwise, Steps 3 to 5 are repeated until the optimum is reached.
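Steps 3 to 5 can be sketched compactly on the tableau itself. The following is a minimal NumPy illustration (the function name, data layout and small tolerance are illustrative assumptions); the ratio in step 4 is computed with absolute values, which is equivalent for the minimization tableaus used here and matches the ratios tabulated in the example below.

```python
import numpy as np

def dual_simplex_step(T):
    """One dual simplex iteration on tableau T (a float array).

    Row 0 is the Z row, the remaining rows belong to the basic variables;
    the last column holds the right-hand sides (br).
    Returns False once all br >= 0, i.e., the optimum is reached."""
    b = T[1:, -1]
    if np.all(b >= -1e-9):
        return False                              # step 6: optimum reached
    r = 1 + int(np.argmin(b))                     # step 3: most negative br -> exiting row
    row, cost = T[r, :-1], T[0, :-1]

    # step 4: ratios over the negative elements of the pivotal row
    ratios = np.full(row.shape, np.inf)
    neg = row < 0
    ratios[neg] = np.abs(cost[neg]) / (-row[neg])
    s = int(np.argmin(ratios))                    # pivotal column -> entering variable
    if np.isinf(ratios[s]):
        raise ValueError("no negative element in the pivotal row: no feasible solution")

    # step 5: pivotal operation, as in the ordinary simplex method
    T[r, :] /= T[r, s]
    for i in range(T.shape[0]):
        if i != r:
            T[i, :] -= T[i, s] * T[r, :]
    return True
```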


Consider the following problem:

Minimize Z = 2x1 + x2

subject to
x1 ≥ 2
3x1 + 4x2 ≤ 24
4x1 + 3x2 ≥ 12
−x1 + 2x2 ≥ 1

By multiplying the ≥ constraints by −1 and introducing slack variables, the problem is reformulated with equality constraints as follows:

Minimize Z = 2x1 + x2

subject to
−x1 + x3 = −2
3x1 + 4x2 + x4 = 24
−4x1 − 3x2 + x5 = −12
x1 − 2x2 + x6 = −1

Expressing the problem in the tableau form:

Iteration 1

Basis    Z     x1     x2     x3    x4    x5    x6     br
Z        1     -2     -1      0     0     0     0      0
x3       0     -1      0      1     0     0     0     -2
x4       0      3      4      0     1     0     0     24
x5       0     -4     -3      0     0     1     0    -12    <- pivotal row (exiting variable x5)
x6       0      1     -2      0     0     0     1     -1
Ratios         0.5    1/3    --    --    --    --

Pivotal column: x2 (minimum ratio); pivotal element: -3.


Tableaus for successive iterations are shown below. Pivotal Row, Pivotal Column and Pivotal Element for each tableau are marked as usual.

Iteration 2

Basis    Z     x1     x2     x3    x4     x5    x6     br
Z        1    -2/3     0      0     0    -1/3    0      4
x3       0     -1      0      1     0     0      0     -2    <- pivotal row (exiting variable x3)
x4       0    -7/3     0      0     1     4/3    0      8
x2       0     4/3     1      0     0    -1/3    0      4
x6       0    11/3     0      0     0    -2/3    1      7
Ratios         2/3    --     --    --    --     --

Pivotal column: x1; pivotal element: -1.

Iteration 3

Basis    Z     x1     x2     x3     x4     x5    x6     br
Z        1      0      0    -2/3     0    -1/3    0    16/3
x1       0      1      0     -1      0     0      0      2
x4       0      0      0    -7/3     1     4/3    0    38/3
x2       0      0      1     4/3     0    -1/3    0     4/3
x6       0      0      0    11/3     0    -2/3    1    -1/3   <- pivotal row (exiting variable x6)
Ratios         --     --     --     --    0.5    --

Pivotal column: x5; pivotal element: -2/3.

Iteration 4

Basis    Z     x1     x2     x3     x4    x5     x6     br
Z        1      0      0    -2.5     0     0    -0.5    5.5
x1       0      1      0     -1      0     0     0       2
x4       0      0      0      5      1     0     2      12
x2       0      0      1    -0.5     0     0    -0.5    1.5
x5       0      0      0    -5.5     0     1    -1.5    0.5
Ratios   (not required: all br are nonnegative)


As all the br are nonnegative, the optimum solution is reached. Thus, the optimal solution is Z = 5.5 with x1 = 2 and x2 = 1.5.
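As a quick check, the dual_simplex_step sketch given earlier can be run on the initial tableau of this example (assuming NumPy and that function are available); it should reproduce the iterations above and stop at the same optimum.

```python
# Initial tableau of the example (columns: Z, x1, x2, x3, x4, x5, x6, br).
T = np.array([
    [1, -2, -1, 0, 0, 0, 0,   0],
    [0, -1,  0, 1, 0, 0, 0,  -2],
    [0,  3,  4, 0, 1, 0, 0,  24],
    [0, -4, -3, 0, 0, 1, 0, -12],
    [0,  1, -2, 0, 0, 0, 1,  -1],
], dtype=float)

while dual_simplex_step(T):
    pass

print(T[0, -1])   # 5.5, the optimal value of Z found in the final tableau above
```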

Solution of Dual from Final Simplex Tableau of Primal

Primal                                        Dual

Maximize Z = 4x1 − x2 + 2x3                   Minimize Z' = 6y1 + 0y2 + 4y3

subject to                                    subject to
2x1 + x2 + 2x3 ≤ 6                            2y1 + y2 + 5y3 ≥ 4
x1 − 4x2 + 2x3 ≤ 0                            y1 − 4y2 − 2y3 ≥ −1
5x1 − 2x2 − 2x3 ≤ 4                           2y1 + 2y2 − 2y3 ≥ 2
x1, x2, x3 ≥ 0                                y1, y2, y3 ≥ 0

[Final simplex tableau of the primal (not reproduced here); the columns of the slack variables of the three constraints are marked with the dual variables y1, y2 and y3.]
As illustrated above, the solution of the dual can be read from the Z-row coefficients of the slack variables of the respective constraints in the primal: y1 = 1, y2 = 1/3, y3 = 1/3, and Z' = Z = 22/3.
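The equality Z = Z' = 22/3 can also be verified numerically. The snippet below is an optional check using scipy.optimize.linprog (assuming SciPy is available); since linprog minimizes, the primal objective is negated and the ≥ constraints of the dual are rewritten in ≤ form.

```python
from scipy.optimize import linprog

# Primal: maximize 4x1 - x2 + 2x3  ->  minimize the negated objective
primal = linprog(c=[-4, 1, -2],
                 A_ub=[[2, 1, 2], [1, -4, 2], [5, -2, -2]],
                 b_ub=[6, 0, 4], method="highs")

# Dual: minimize 6y1 + 0y2 + 4y3 with the >= constraints multiplied by -1
dual = linprog(c=[6, 0, 4],
               A_ub=[[-2, -1, -5], [-1, 4, 2], [-2, -2, 2]],
               b_ub=[-4, 1, -2], method="highs")

print(-primal.fun, dual.fun)   # both should be 22/3; dual.x should be about [1, 1/3, 1/3]
```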


Sensitivity or post optimality analysis

A dual variable associated with a constraint indicates the change in the optimum value of Z resulting from a small change in the RHS of that constraint. Thus,

ΔZ = yj Δbi

where yj is the dual variable associated with the i-th constraint, Δbi is the small change in the RHS of the i-th constraint, and ΔZ is the change in the objective function owing to Δbi.

Let, for an LP problem, the i-th constraint be 2x1 + 5x2 ≤ 50 and the optimum value of the objective function be 250. What happens if the RHS of the i-th constraint changes to 55, i.e., the constraint becomes 2x1 + 5x2 ≤ 55? To answer this, let the dual variable associated with the i-th constraint be yj, whose optimum value is 2.5 (say). Thus, Δbi = 55 − 50 = 5 and yj = 2.5, so ΔZ = yj Δbi = 2.5 × 5 = 12.5, and the revised optimum value of the objective function is 250 + 12.5 = 262.5.

It may be noted that Δbi should be chosen small enough that it does not cause a change in the optimal basis.
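The calculation in this example is, in code, nothing more than the following (the values are those used above):

```python
y_j = 2.5            # optimum value of the dual variable of the i-th constraint
delta_b = 55 - 50    # change in the RHS of that constraint
delta_Z = y_j * delta_b
print(delta_Z, 250 + delta_Z)   # 12.5 and 262.5, as computed above
```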

D Nagesh Kumar, IISc, Bangalore M3L5