Linear Programming in Matrix Form
Appendix B

We first introduce matrix concepts in linear programming by developing a variation of the simplex method called the revised simplex method. This algorithm, which has become the basis of all commercial computer codes for linear programming, simply recognizes that much of the information calculated by the simplex method at each iteration, as described in Chapter 2, is not needed. Thus, efficiencies can be gained by computing only what is absolutely required. Then, having introduced the ideas of matrices, some of the material from Chapters 2, 3, and 4 is recast in matrix terminology. Since matrices are basically a notational convenience, this reformulation provides essentially nothing new to the simplex method, the sensitivity analysis, or the duality theory. However, the economy of the matrix notation provides added insight by streamlining the previous material and, in the process, highlighting the fundamental ideas. Further, the notational convenience is such that extending some of the results of the previous chapters becomes more straightforward.

B.1 A PREVIEW OF THE REVISED SIMPLEX METHOD

The revised simplex method, or the simplex method with multipliers, as it is often referred to, is a modification of the simplex method that significantly reduces the total number of calculations that must be performed at each iteration of the algorithm. Essentially, the revised simplex method, rather than updating the entire tableau at each iteration, computes only those coefficients that are needed to identify the pivot element. Clearly, the reduced costs must be determined so that the entering variable can be chosen. However, the variable that leaves the basis is determined by the minimum-ratio rule, so that only the updated coefficients of the entering variable and the current righthand-side values are needed for this purpose. The revised simplex method then keeps track of only enough information to compute the reduced costs and the minimum-ratio rule at each iteration.

The motivation for the revised simplex method is closely related to our discussion of simple sensitivity analysis in Section 3.1, and we will re-emphasize some of that here. In that discussion of sensitivity analysis, we used the shadow prices to help evaluate whether or not the contribution from engaging in a new activity was sufficient to justify diverting resources from the current optimal group of activities. The procedure was essentially to "price out" the new activity by determining the opportunity cost associated with introducing one unit of the new activity, and then comparing this value to the contribution generated by engaging in one unit of the activity. The opportunity cost was determined by valuing each resource consumed, by introducing one unit of the new activity, at the shadow price associated with that resource.

The custom-molder example used in Chapter 3 to illustrate this point is reproduced in Tableau B.1. Activity 3, producing one hundred cases of champagne glasses, consumes 8 hours of production capacity and 10 hundred cubic feet of storage space. The shadow prices, determined in Chapter 3, are $11/14 per hour of production time and $1/35 per hundred cubic feet of storage capacity, measured in hundreds of dollars.
Tableau B.1

Basic        Current
variables    values     x1     x2     x3     x4     x5     x6    (-z)
x4              60       6      5      8      1      0      0      0
x5             150      10     20     10      0      1      0      0
x6               8       1      0      0      0      0      1      0
(-z)             0       5      4.5    6      0      0      0      1

The resulting opportunity cost of diverting resources to produce champagne glasses is then:

$$\frac{11}{14}(8) + \frac{1}{35}(10) = \frac{46}{7} = 6\frac{4}{7}.$$

Comparing this opportunity cost with the $6 contribution results in a net loss of $4/7 per case, or a loss of $57 1/7 per one hundred cases. It would clearly not be advantageous to divert resources from the current basic solution to the new activity. If, on the other hand, the activity had priced out positively, then bringing the new activity into the current basic solution would appear worthwhile. The essential point is that, by using the shadow prices and the original data, it is possible to decide, without elaborate calculations, whether or not a new activity is a promising candidate to enter the basis.

It should be quite clear that this procedure of pricing out a new activity is not restricted to knowing in advance whether the activity will be promising or not. The pricing-out mechanism, therefore, could in fact be used at each iteration of the simplex method to compute the reduced costs and choose the variable to enter the basis. To do this, we need to define shadow prices at each iteration. In transforming the initial system of equations into another system of equations at some iteration, we maintain the canonical form at all times. As a result, the objective-function coefficients of the variables that are currently basic are zero at each iteration. We can therefore define simplex multipliers, which are essentially the shadow prices associated with a particular basic solution, as follows:

Definition. The simplex multipliers $(y_1, y_2, \ldots, y_m)$ associated with a particular basic solution are the multiples of their initial system of equations such that, when all of these equations are multiplied by their respective simplex multipliers and subtracted from the initial objective function, the coefficients of the basic variables are zero.

Thus the basic variables must satisfy the following system of equations:

$$y_1 a_{1j} + y_2 a_{2j} + \cdots + y_m a_{mj} = c_j \qquad \text{for } j \text{ basic}.$$

The implication is that the reduced costs associated with the nonbasic variables are then given by:

$$\bar{c}_j = c_j - (y_1 a_{1j} + y_2 a_{2j} + \cdots + y_m a_{mj}) \qquad \text{for } j \text{ nonbasic}.$$

If we then performed the simplex method in such a way that we knew the simplex multipliers, it would be straightforward to find the largest reduced cost by pricing out all of the nonbasic variables and comparing them.
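To make this pricing-out calculation concrete, the short Python sketch below evaluates $\bar{c}_j = c_j - (y_1 a_{1j} + \cdots + y_m a_{mj})$ for each nonbasic variable of the custom-molder example at the optimal basis, using the multipliers $y = (11/14, 1/35, 0)$ given above (the third, demand, constraint is not binding at the optimal solution, so its shadow price is zero). This is only an illustrative sketch with exact fractions; the function and variable names are ours, not part of the original text.

```python
from fractions import Fraction as F

# Original data of the custom-molder example (Tableau B.1):
# one column of constraint coefficients per variable.
columns = {
    "x1": [F(6), F(10), F(1)],
    "x2": [F(5), F(20), F(0)],
    "x3": [F(8), F(10), F(0)],   # champagne glasses
    "x4": [F(1), F(0), F(0)],    # slack, production hours
    "x5": [F(0), F(1), F(0)],    # slack, storage capacity
    "x6": [F(0), F(0), F(1)],    # slack, demand constraint
}
contribution = {"x1": F(5), "x2": F(9, 2), "x3": F(6),
                "x4": F(0), "x5": F(0), "x6": F(0)}

# Simplex multipliers (shadow prices) associated with the optimal basis.
y = [F(11, 14), F(1, 35), F(0)]

def reduced_cost(j):
    # cbar_j = c_j - (y1*a1j + y2*a2j + ... + ym*amj)
    return contribution[j] - sum(yi * aij for yi, aij in zip(y, columns[j]))

for j in ["x3", "x4", "x5"]:              # the nonbasic variables
    print(j, reduced_cost(j))             # x3 -4/7, x4 -11/14, x5 -1/35
```

Since no reduced cost is positive, no candidate prices out favorably here; at an intermediate (non-optimal) iteration, the same computation would single out the variable with the largest reduced cost as the one to enter the basis.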
In the discussion in Chapter 3 we showed that the shadow prices were readily available from the final system of equations. In essence, since varying the righthand-side value of a particular constraint is similar to adjusting the slack variable, it was argued that the shadow prices are the negative of the objective-function coefficients of the slack (or artificial) variables in the final system of equations. Similarly, the simplex multipliers at each intermediate iteration are the negative of the objective-function coefficients of these variables. In transforming the initial system of equations into any intermediate system of equations, a number of iterations of the simplex method are performed, involving subtracting multiples of a row from the objective function at each iteration. The simplex multipliers, or shadow prices in the final tableau, then reflect a summary of all of the operations that were performed on the objective function during this process.

If we then keep track of the coefficients of the slack (or artificial) variables in the objective function at each iteration, we immediately have the necessary simplex multipliers to determine the reduced costs $\bar{c}_j$ of the nonbasic variables as indicated above. Finding the variable to introduce into the basis, say $x_s$, is then easily accomplished by choosing the maximum of these reduced costs. The next step in the simplex method is to determine the variable $x_r$ to drop from the basis by applying the minimum-ratio rule as follows:

$$\frac{\bar{b}_r}{\bar{a}_{rs}} = \min_i \left\{ \frac{\bar{b}_i}{\bar{a}_{is}} \;\middle|\; \bar{a}_{is} > 0 \right\}.$$

Hence, in order to determine which variable to drop from the basis, we need both the current righthand-side values and the current coefficients, in each equation, of the variable we are considering introducing into the basis. It would, of course, be easy to keep track of the current righthand-side values for each iteration, since this comprises only a single column. However, if we are to significantly reduce the number of computations performed in carrying out the simplex method, we cannot keep track of the coefficients of each variable in each equation on every iteration. In fact, the only coefficients we need to carry out the simplex method are those of the variable to be introduced into the basis at the current iteration. If we could find a way to generate these coefficients after we knew which variable would enter the basis, we would have a genuine economy of computation over the standard simplex method.

It turns out, of course, that there is a way to do exactly this. In determining the new reduced costs for an iteration, we used only the initial data and the simplex multipliers, and further, the simplex multipliers were the negative of the coefficients of the slack (or artificial) variables in the objective function at that iteration. In essence, the coefficients of these variables summarize all of the operations performed on the objective function. Since we began our calculations with the problem in canonical form with respect to the slack (or artificial) variables and $-z$ in the objective function, it would seem intuitively appealing that the coefficients of these variables in any equation summarize all of the operations performed on that equation. To illustrate this observation, suppose that we are given only the part of the final tableau that corresponds to the slack variables for our custom-molder example, as shown in Tableau B.2.

Tableau B.2

Basic        Current
variables    values       x4        x5       x6
x2            4 2/7      -1/7      3/35      0
x6            1 4/7      -2/7      1/14      1
x1            6 3/7       2/7     -1/14      0
(-z)        -51 3/7    -11/14     -1/35      0
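The following Python sketch checks this observation numerically for the custom-molder example: applying the slack-variable coefficients of Tableau B.2, row by row, to the original column of $x_3$ and to the original righthand side reproduces the updated coefficients of $x_3$ and the current values of the basic variables. The code is only an illustrative check with exact fractions, and the helper names are ours.

```python
from fractions import Fraction as F

# Slack-variable portion of the final tableau (Tableau B.2),
# one row per constraint, in the row order x2, x6, x1.
slack_part = [
    [F(-1, 7), F(3, 35), F(0)],   # row whose basic variable is x2
    [F(-2, 7), F(1, 14), F(1)],   # row whose basic variable is x6
    [F(2, 7), F(-1, 14), F(0)],   # row whose basic variable is x1
]

def apply_recorded_operations(column):
    # Apply the row operations summarized by the slack columns
    # to a column of the original data.
    return [sum(r * c for r, c in zip(row, column)) for row in slack_part]

a3 = [F(8), F(10), F(0)]      # original column of x3 (Tableau B.1)
b = [F(60), F(150), F(8)]     # original righthand side (Tableau B.1)

print([str(v) for v in apply_recorded_operations(a3)])
# ['-2/7', '-11/7', '11/7']   updated coefficients of x3
print([str(v) for v in apply_recorded_operations(b)])
# ['30/7', '11/7', '45/7']    current values 4 2/7, 1 4/7, 6 3/7

# The simplex multipliers can be read off as the negatives of the slack
# coefficients in the objective row of Tableau B.2: y = (11/14, 1/35, 0).
```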
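Once the current coefficients of the entering variable and the current righthand-side values are in hand, the minimum-ratio rule itself takes only a few lines. The sketch below, again purely illustrative and with names of our own choosing, applies it to the very first iteration starting from Tableau B.1, where the multipliers are still zero, so the reduced costs equal the original objective coefficients and $x_3$ (reduced cost 6) would be chosen to enter the basis.

```python
from fractions import Fraction as F

def min_ratio_rule(rhs, entering_column, basic_variables):
    # Minimize bbar_i / abar_is over the rows with abar_is > 0;
    # return the basic variable of the minimizing row and the ratio.
    candidates = [(b / a, var)
                  for b, a, var in zip(rhs, entering_column, basic_variables)
                  if a > 0]
    if not candidates:
        return None, None             # no positive coefficient: unbounded
    ratio, leaving = min(candidates)
    return leaving, ratio

# First iteration from Tableau B.1: x3 enters the basis.
basic_variables = ["x4", "x5", "x6"]
rhs = [F(60), F(150), F(8)]
x3_column = [F(8), F(10), F(0)]       # current coefficients of x3

leaving, ratio = min_ratio_rule(rhs, x3_column, basic_variables)
print(leaving, ratio)                 # x4 15/2  (ratios: 60/8 and 150/10)
```

The minimum ratio, 60/8, occurs in the row whose basic variable is $x_4$, so $x_4$ would leave the basis; the third row is skipped because its coefficient of the entering variable is not positive.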