Contents

1. The Simplex Method
1.1. Lecture 1: Introduction
1.2. Lecture 2: Notation, Background, History
1.3. Lecture 3: The simplex method
1.4. Lecture 4: An example
1.5. Lecture 5: Complications
1.6. Lecture 6: 2-phase method and Big-M
1.7. Lecture 7: Dealing with a general LP
1.8. Lecture 8: Duality
1.9. Lecture 9: More duality
1.10. Lecture 10: Assorted facts on duality
1.11. Lecture 11: A problem for duality
1.12. Lecture 12: Post-optimal considerations and general duality
1.13. Lecture 13: The revised simplex
1.14. Lecture 14: Filling the knapsack
1.15. Lecture 15: Knapsack, day 2
1.16. Lecture 16: Knapsack, day 3
1.17. Lecture 17: A branch and bound example in class
1.18. Lecture 18: Matrix games
1.19. Lecture 19: Mixed strategies
1.20. Lectures 20, 21: A game example
1.21. Lecture 22: Review for the test
1.22. Lecture 23: Midterm
1.23. Lectures 24, 25: Network simplex, transshipment (Chapter 19)
1.24. Lecture 26: Network simplex, transshipment: initial trees
1.25. Lecture 27: An example on network simplex
1.26. Lecture 28: Upper bounded transshipment problems
1.27. Lecture 29: Upper bounded network II
1.28. Lecture 30: Network flows
1.29. Network simplex on maximum flow problems
1.30. Lecture 31: Network flows, part 2
1.31. Lecture 32: An example on maximum flows
1.32. Lecture 33: Applications of network simplex
1.33. Lecture 34: More applications of the network simplex
1.34. Lecture 35: Transportation problems
1.35. Lecture 36: A transport problem
1.36. Lecture 37: Transport example, second day
1.37. Lecture 38: Integer programming
1.38. Lectures 39, 40: Gröbner bases and Buchberger algorithm
1.39. Lecture 41: Gröbner bases with Maple
1.40. Lectures 42, 43, 44: Review of final material, questions

421 COURSE NOTES

ULI WALTHER

Abstract. Textbook: V. Chvátal, "Linear Programming". Grade: 30% midterm, 30% homework, 40% final. Homework is collected each Thursday in class or in my office by 3pm.

1. The Simplex Method

What is linear programming?

1.1. Lecture 1: Introduction.

1.1.1. Some examples:

Example 1.1 (Knapsack problem). Looking for x_1, ..., x_n such that

    x_1 c_1 + ... + x_n c_n → max
    x_1 a_1 + ... + x_n a_n ≤ A   (the knapsack)
    x_i ∈ {0, 1} for all i.

Example 1.2 (Transportation problem). Looking for x_{1,1}, ..., x_{m,n} such that

    Σ_{i=1}^{m} Σ_{j=1}^{n} c_{i,j} x_{i,j} → min   (fuel from i to j)
    Σ_{j=1}^{n} x_{i,j} = r_i  for all i   (producers)
    Σ_{i=1}^{m} x_{i,j} = s_j  for all j   (consumers)
    x_{i,j} ≥ 0  for all i, j.

Example 1.3 (Matrix games). Two players play against each other a version of stone-paper-scissors. The outcome is decided by how they choose their strategies; there are no random components. The question is how to choose the strategy. Minimize losses in the worst case? Maximize wins in the best case? Maximize wins in the worst case? Minimize losses in the best case? When is a game fair? For example,

         a    b    c    d
    A    0    2   -3    0
    B   -2    0    0    3
    C    3    0    0   -4
    D    0   -3    4    0

A, ..., D and a, ..., d are the possible choices the players Bob and Alice have. Tabulated are the winnings of Alice. How should they play?

Example 1.4 (Isoperimetric problem). Amongst all closed curves in the plane with circumference 1 meter, which curve encloses the largest area? (This we will not answer!)

Example 1.5 (Garbage removal). Given a weighted graph, find a cheapest (closed?) path that travels along all edges. We may talk about that.

1.1.2. Some geometric remarks. Let us solve

    2x + 3y → max
    x + 3y ≤ 6
    x ≤ 12
    x − y ≤ 1
    3x + 2y ≤ 6
    x, y ≥ 0.

(2, 3) is the direction in which we maximize. The inequalities cut out a finite region with straight lines and corners as boundary. The max will be taken in a corner (unless the maximizing direction is perpendicular to a boundary → degeneracy).
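To cross-check the picture numerically, here is a minimal sketch (not part of the original notes, and assuming SciPy is available) that hands this small program to scipy.optimize.linprog; since linprog minimizes, the objective 2x + 3y is negated.

    # A minimal sketch (not from the notes): solve the small example with SciPy.
    # linprog minimizes c @ x, so we maximize 2x + 3y by minimizing -2x - 3y;
    # the nonnegativity bounds x, y >= 0 are stated explicitly.
    from scipy.optimize import linprog

    c = [-2, -3]
    A_ub = [[1, 3],    # x + 3y <= 6
            [1, 0],    # x      <= 12
            [1, -1],   # x - y  <= 1
            [3, 2]]    # 3x + 2y <= 6
    b_ub = [6, 12, 1, 6]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # the corner (6/7, 12/7) with value 48/7 ≈ 6.86

The optimum sits at the corner where x + 3y = 6 and 3x + 2y = 6 meet, exactly as the geometric argument predicts.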
1.2. Lecture 2: Notation, Background, History.

Definition 1.6. A convex set C is a set for which all line segments between points of C are completely contained in C as well. (Show some examples of convex sets.)

Definition 1.7. A point P in R^n \ C (the complement of C in R^n) belongs to the boundary of C ⊆ R^n if one can get arbitrarily close to P while staying in C. A closed set C ⊆ R^n is a set that contains its boundary.

A half space of R^n is the collection of points satisfying a single linear inequality. A polyhedron or polytope is the intersection of a finite number of half spaces. Polyhedra are convex and closed.

Theorem 1.8. Let C be closed. A linear function f on C takes its maximum and minimum at a point on the boundary, or at infinity. For polyhedra, maxima occur in corners or at infinity.

Proof. If f is constant there is nothing to prove. Let L be a line in R^n that goes through P ∈ C, and pick it in such a way that f grows along L. Then either we reach at some point the end of C; then the maximum point on L is part of the boundary of C, which, as C is closed, is part of C. Or we never reach the end of C, in which case f is unbounded on C. Hence, no matter what P is, there is always a better point than P on the boundary (or f is unbounded). So the max is on the boundary, and we know the boundary is part of C. □

In principle: given any number (say m) of conditions (the inequalities) in (say) n < m variables,
• choose n conditions,
• read them as equalities,
• solve for the x_i,
• check if the other inequalities are satisfied (admissible solution),
• and if so, calculate the objective function.
Once this has been done, compare all results and pick the best. In theory, linear optimization (with linear constraints) is trivial.

Problem: in practice, one often has hundreds of variables and thousands of constraints. Let's say n = 500, m = 2000. Then there are (2000 choose 500) ≥ 10^100 corners. If we could check 10^10 per second (about 1,000,000 times what can be done), it would take so much time (10^82 years) that a snail could go 10^54 times from one end of the universe to the other. Thus, one needs a clever way of checking. Strategy: start in some corner, and move to a better one. In the 2000/500 example this can usually be done in a couple of hundred steps.

Example 1.9 (A diet problem).

    3x_1 + 24x_2 + 13x_3 + 9x_4 + 20x_5 + 19x_6 → min   (price)
    x_1 ≤ 4   (oatmeal)
    x_2 ≤ 3   (chicken)
    x_3 ≤ 2   (eggs)
    x_4 ≤ 8   (milk)
    x_5 ≤ 2   (cherry pie)
    x_6 ≤ 2   (pork and beans)
    110x_1 + 205x_2 + 160x_3 + 160x_4 + 420x_5 + 260x_6 ≥ 2000   (calories)
    4x_1 + 32x_2 + 13x_3 + 8x_4 + 4x_5 + 14x_6 ≥ 55   (protein, g)
    2x_1 + 12x_2 + 54x_3 + 285x_4 + 22x_5 + 80x_6 ≥ 800   (calcium, mg)
    x_i ≥ 0 for all i.

The first six constraints are dictated by taste, the other three by nutritionists. Without the taste constraints, one could take x_6 = 10 and all others zero (gives $1.90). With them, x_4 = 8, x_5 = 2 works (gives $1.12). But is it the cheapest?
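As a numerical aside (not in the original notes, again assuming SciPy), the same linprog call settles the question: the taste limits become variable bounds, and each "≥" nutrition row is multiplied by −1 to fit the "≤" form that linprog expects.

    # A minimal sketch (not from the notes) that feeds Example 1.9 to SciPy.
    # linprog wants A_ub @ x <= b_ub, so the ">=" nutrition rows are negated.
    from scipy.optimize import linprog

    cost = [3, 24, 13, 9, 20, 19]                      # cents per serving
    A_ub = [
        [-110, -205, -160, -160, -420, -260],          # calories >= 2000
        [-4,   -32,  -13,  -8,   -4,   -14],           # protein  >= 55 g
        [-2,   -12,  -54,  -285, -22,  -80],           # calcium  >= 800 mg
    ]
    b_ub = [-2000, -55, -800]
    bounds = [(0, 4), (0, 3), (0, 2), (0, 8), (0, 2), (0, 2)]   # taste limits
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(res.x, res.fun)   # a diet cheaper than the $1.12 guess above

The $1.12 guess is in fact not optimal: for instance x_1 = 4, x_4 = 4.5, x_5 = 2 already satisfies all constraints and costs only 92.5 cents.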
Explain: objective (or cost) function, linear equation (constraint), non-negativity constraints, optimal solution, optimal value, feasible/infeasible solution/problem, unbounded problem, decision variables. Explain how unbounded and degenerate problems and infeasible ones come about.

Remark 1.10. George Dantzig (1947) made linear programming a science. Before him, various people had thought about it and recognized its importance (Fourier), but could not make it efficient. Kantorovich (1939) had good ideas like Dantzig's, but they were not published. In 1975, two mathematicians became Nobel prize winners in economics, one a student of Dantzig. The simplex algorithm (next) works really well most of the time, but is sometimes awful. The probability of an awful case is zero. There does exist an algorithm (Khachiyan, 1979) that is never awful, but it is almost always beaten by the simplex algorithm. That means: with probability zero, the simplex method loses.

Homework 1 (for week 2).

(1) Find an example of an optimization problem on a region C in R^2 with a linear objective function f that is bounded on C but where there is no max in C. (This will, by our theorem, require a non-closed C.) Prove that there is no max on C.

(2) Find an example of an optimization problem with a closed C where the max of f is not on the boundary and not at infinity. (By the theorem, this will require a non-linear f.) Prove that the max is where you claim it to be.

(3) Solve the following linear program graphically and explain why the max is taken where you claim it is.

    x_1 ≤ 4
    2x_2 ≤ 12
    3x_1 + 2x_2 ≤ 18
    x_1, x_2 ≥ 0
    3x_1 + 5x_2 → max

(4) Solve the following linear program graphically and explain why the max is taken where you claim it is.

    x_2 ≤ 10
    x_1 + 2x_2 ≤ 15
    x_1 + x_2 ≤ 12
    5x_1 + 3x_2 ≤ 45
    x_1, x_2 ≥ 0
    10x_1 + 20x_2 → max

(5) The Southern Confederation of Kibbutzim (SCK) is a group of three kibbutzim (communal farming communities) in Israel.