Mixed-Integer Linear Programming (MILP): Branch-And-Bound Search



Benoît Chachuat <[email protected]>
McMaster University, Department of Chemical Engineering
ChE 4G03: Optimization in Chemical Engineering

Standard Mixed-Integer Linear Programming (MILP) Formulation:

    min over x, y:   z := c^T x + d^T y
    subject to:      A x + E y {≤, =, ≥} b
                     x_min ≤ x ≤ x_max,   y ∈ {0, 1}^ny

We seek a rigorous solution to this problem.

Solving Discrete Optimization Problems

- The concept of local derivative information (the gradient) does not exist for discrete variables!
- The basic numerical optimization paradigm (improving search) applies only when we know, or assume, the values of all integer variables.
- We need a new approach for problems with integer variables!

That discrete optimization models are more difficult to solve than continuous models may appear counter-intuitive. After all, a discrete model has only a finite number of choices for its decision variables!

Total Enumeration: solve a discrete optimization model by trying all possible combinations of the discrete variables and keeping whichever is best.

1. With no more than a few discrete variables, total enumeration is often the most effective solution method.
2. But exponential growth makes total enumeration impractical for models with more than a handful of discrete decision variables. Back to the drawing board!

Exponential Growth with Total Enumeration: n binary variables give 2^n combinations:
- n = 10: 2^10 = 1,024
- n = 20: 2^20 ≈ 1.05 × 10^6
- n = 30: 2^30 ≈ 1.07 × 10^9

Class Exercise: Solve the following discrete optimization model by total enumeration (tabulate the objective for each of the 8 cases):

    max over y:   7 y1 + 4 y2 + 19 y3
    subject to:   y1 + y2 ≤ 2
                  y2 + y3 ≤ 1
                  y1, y2, y3 ∈ {0, 1}
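As a quick sketch (not part of the slides), the class exercise above can be solved exactly by total enumeration of all 2^3 = 8 binary cases:

```python
from itertools import product

# Total enumeration of the class exercise:
#   max 7*y1 + 4*y2 + 19*y3
#   s.t. y1 + y2 <= 2,  y2 + y3 <= 1,  y binary
best_value, best_point = None, None
for y1, y2, y3 in product((0, 1), repeat=3):
    if y1 + y2 <= 2 and y2 + y3 <= 1:        # keep feasible cases only
        value = 7*y1 + 4*y2 + 19*y3
        if best_value is None or value > best_value:
            best_value, best_point = value, (y1, y2, y3)

print(best_point, best_value)  # (1, 0, 1) 26
```

The optimum sets y1 = y3 = 1 and y2 = 0: fixing y2 = 1 would force y3 = 0 and sacrifice the large coefficient 19.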
Solving Discrete Optimization Problems: A New Paradigm for Discrete Optimization Search

- Construct a sequence of related, simpler subproblems whose solutions converge (finitely) to the original solution.
- Use relaxations to define the subproblems, e.g., relax the feasible region (LP relaxations) and/or the objective function (Lagrangian relaxations).
- The subproblems should be easier to solve than the original (since many may have to be solved).
- Each subproblem should yield a bound on the original optimal value: a lower bound for a minimize problem, an upper bound for a maximize problem.
- The number of subproblems solved should be much smaller than with the complete enumeration method.

Outline

1. Introduction
2. Relaxations of Discrete Optimization Models (LP relaxations)
3. Branch-and-Bound Search (root node, terminating partial solutions, heuristics)
4. Final Words

For additional details, see Rardin (1998), Chapter 12.

Relaxations of Discrete Optimization Models

Constraint Relaxation: model (R) is said to be a constraint relaxation of model (P) if:
- every feasible solution to (P) is also feasible in (R);
- (P) and (R) have the same objective function.

In other words, the feasible solutions of the original model (P) form a subset of the feasible solutions of the relaxed model (R), so the optimal value of (R) can only be at least as good as that of (P).

Linear Programming Relaxations

LP relaxations of a MILP model are formed by treating the discrete variables as continuous, while retaining all other constraints:

    yi ∈ {0, 1}   =⇒   0 ≤ yi ≤ 1

Motivations:
- Bring all the power of LP to bear on the analysis of discrete models.
- By far the most used relaxation form.
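The defining property of a constraint relaxation can be checked by brute force on a small model. A minimal sketch (an illustration, not from the slides) reuses the earlier class-exercise constraints as (P) and drops one constraint to form (R):

```python
from itertools import product

# Constraint relaxation: every point feasible in (P) must be feasible in (R).
# (P): y1 + y2 <= 2 and y2 + y3 <= 1;  (R): the second constraint is dropped.
def feasible_P(y):
    y1, y2, y3 = y
    return y1 + y2 <= 2 and y2 + y3 <= 1

def feasible_R(y):
    y1, y2, y3 = y
    return y1 + y2 <= 2                  # y2 + y3 <= 1 dropped

def obj(y):
    return 7*y[0] + 4*y[1] + 19*y[2]

# The defining property holds on every binary point:
assert all(feasible_R(y) for y in product((0, 1), repeat=3) if feasible_P(y))

# A relaxation can only enlarge the feasible set, so for a maximize model
# its optimal value is an upper bound on the original optimum:
best_P = max(obj(y) for y in product((0, 1), repeat=3) if feasible_P(y))
best_R = max(obj(y) for y in product((0, 1), repeat=3) if feasible_R(y))
print(best_P, best_R)  # 26 30
```

Here the relaxed optimum 30 (at y = (1, 1, 1)) over-estimates the true optimum 26, exactly the bounding behavior branch-and-bound exploits.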
Example. Original MILP model:

    min over x, y:   7 x1 + x2 + 3 y1 + 6 y2
    subject to:      x1 + 10 x2 + 2 y1 + y2 ≥ 100
                     y1 + y2 ≤ 1
                     x1, x2 ≥ 0,   y1, y2 ∈ {0, 1}

Relaxation #1 — drop a constraint (here y1 + y2 ≤ 1):

    min over x, y:   7 x1 + x2 + 3 y1 + 6 y2
    subject to:      x1 + 10 x2 + 2 y1 + y2 ≥ 100
                     x1, x2 ≥ 0,   y1, y2 ∈ {0, 1}

Relaxation #2 — relax a constraint right-hand side (100 → 50):

    min over x, y:   7 x1 + x2 + 3 y1 + 6 y2
    subject to:      x1 + 10 x2 + 2 y1 + y2 ≥ 50
                     y1 + y2 ≤ 1
                     x1, x2 ≥ 0,   y1, y2 ∈ {0, 1}

Relaxation #3 — remove the integrality requirements (the LP relaxation):

    min over x, y:   7 x1 + x2 + 3 y1 + 6 y2
    subject to:      x1 + 10 x2 + 2 y1 + y2 ≥ 100
                     y1 + y2 ≤ 1
                     x1, x2 ≥ 0,   0 ≤ y1, y2 ≤ 1

Question: Are LP relaxations guaranteed to yield valid relaxations?

Properties of LP Relaxations

Solution Bounds from LP Relaxations:
- The optimal value of an LP relaxation of a maximize MILP model yields an upper bound on the optimal value of that model.
- The optimal value of an LP relaxation of a minimize MILP model yields a lower bound.

Class Exercise: Consider the following ILP model:

    max over y:   y1 + y2 + y3
    subject to:   y1 + y2 ≤ 1
                  y1 + y3 ≤ 1
                  y2 + y3 ≤ 1
                  y1, y2, y3 ∈ {0, 1}

1. Formulate and solve an LP relaxation of the ILP model (by inspection).
2. Solve the ILP model (by inspection) and compare the optimal values.

Proving Infeasibility with Relaxations: if an LP relaxation is infeasible, so is the MILP model it relaxes.

Question: Can we conclude anything regarding the feasibility of the ILP model if the LP relaxation has a feasible solution?

Class Exercise: Use LP relaxations to help establish infeasibility of the following ILP models:

    min over y:   8 y1 + 2 y2              min over y:   y1 + 2 y2
    subject to:   y1 − y2 ≥ 2              subject to:   4 y1 + 2 y2 ≥ 1
                  −y1 + y2 ≥ −1                          4 y1 + 4 y2 ≤ 3
                  y1, y2 ∈ {0, 1, 2, ...}               y1, y2 ∈ {0, 1}

Optimal Solutions from Relaxations: if an optimal solution to an LP relaxation is feasible in the MILP model it relaxes, that solution is optimal in the model too.

Class Exercise: Solve LP relaxations for the following ILP models. Is the relaxation optimum also optimal in the MILP model?

    max over y:   y1 + y2 + y3             min over y:   20 y1 + 8 y2 + 3 y3
    subject to:   y1 + y2 ≤ 1              subject to:   y1 + y2 + y3 ≤ 1
                  y1 + y3 ≤ 1                            y1, y2, y3 ∈ {0, 1}
                  y2 + y3 ≤ 1
                  y1, y2, y3 ∈ {0, 1}

Branch-and-Bound Search

Branch-and-bound algorithms combine a partial enumeration strategy with relaxation techniques:
- Classes of solutions are formed and investigated to determine whether or not they can contain optimal solutions.
- This search is conducted by analyzing the associated relaxations.
- Only promising classes are searched in further detail.

Partial Solutions and Completions
- A partial solution has some discrete decision variables fixed, while the others are left free (denoted by #).
- The completions of a partial solution are the possible full solutions agreeing with the partial solution on all fixed variables.
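As a sketch of this definition (a hypothetical helper, not from the slides), completions can be generated mechanically, using `None` for the # placeholder:

```python
from itertools import product

# Completions of a partial solution: fill every free position (None, shown
# as # on the slides) with 0/1 while keeping the fixed positions as given.
def completions(partial):
    free = [i for i, v in enumerate(partial) if v is None]
    result = []
    for values in product((0, 1), repeat=len(free)):
        full = list(partial)
        for i, v in zip(free, values):
            full[i] = v
        result.append(tuple(full))
    return result

print(completions((1, None, 0, None)))
# [(1, 0, 0, 0), (1, 0, 0, 1), (1, 1, 0, 0), (1, 1, 0, 1)]
```

A partial solution with k free binary variables therefore has exactly 2^k completions.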
Example: y = (1, #, 0, #) is a partial solution with y1 = 1 and y3 = 0, while y2 and y4 are free; its completions are (1, 0, 0, 0), (1, 1, 0, 0), (1, 0, 0, 1), and (1, 1, 0, 1).

Heuristics: LP relaxations may produce optimal solutions that are easily "rounded" to good feasible solutions for the corresponding MILP model.

Branch-and-Bound Tree

- Nodes of the B&B tree represent partial solutions: the root node y = (#, #, ...) has all discrete variables free, each level down fixes one more discrete variable, and the leaves, e.g. y = (0, 0, ...), have all discrete variables fixed.
- Node numbers indicate the sequence in which the nodes are investigated.
- Edges of the B&B tree specify how the variables are fixed (e.g., y1 = 0 vs. y1 = 1): the branching part.
- The total number of nodes is 1 + 2 + 2^2 + · · · + 2^ny = Σ_{i=0}^{ny} 2^i > 2^ny!

Getting Started: The Root Node

The B&B search begins at the initial, or root, partial solution y(0) := (#, #, ...), with all discrete variables free. The LP relaxation at the root node is obtained from the full MILP model by replacing y ∈ {0, 1}^ny with 0 ≤ y ≤ 1:

    min over x, y:   z := c^T x + d^T y
    subject to:      A x + E y {≤, =, ≥} b
                     x_min ≤ x ≤ x_max,   0 ≤ y ≤ 1

Solution of the LP relaxation at the root node provides:
- a lower bound on the MILP (global) optimum for a minimize problem;
- an upper bound for a maximize problem.

Outcomes from the LP Relaxation Solution at the Root Node

1. No feasible solution.
   Action: Terminate by infeasibility — the MILP problem is itself infeasible.
2. All relaxed binary variables are either 0 or 1 at the optimum.
   Action: Terminate by completion — we have found an optimum for the MILP problem (we are very lucky!).
3. Some relaxed binary variables have fractional values at the optimum.
   Action: Branch — choose one of the fractional relaxed variables, e.g., y1, and create two new nodes, y = (0, #, ...) and y = (1, #, ...).

Intermediate Nodes

Candidate Problems: the candidate problem associated with a partial solution to an MILP model is the restricted model obtained by fixing the discrete variables as in the partial solution:

    min over x, y:   z := c^T x + d^T y
    subject to:      A x + E y {≤, =, ≥} b
                     x_min ≤ x ≤ x_max
                     yi fixed for all i ∈ F(k) (the fixed set),   yi ∈ {0, 1} otherwise
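The pieces above — partial solutions, relaxation bounds, branching, and pruning against an incumbent — can be sketched on the earlier total-enumeration exercise. As a simplifying assumption of this sketch (not the slides' method), the node bound comes from a constraint relaxation that drops all constraints, rather than from an LP relaxation:

```python
from itertools import product

# Branch-and-bound on  max 7*y1 + 4*y2 + 19*y3
#                      s.t. y1 + y2 <= 2,  y2 + y3 <= 1,  y binary.
# The "relaxation" here simply drops every constraint (a constraint
# relaxation), so a node's upper bound is the fixed part of the objective
# plus every positive coefficient of a still-free variable.
C = (7, 4, 19)

def feasible(y):
    return y[0] + y[1] <= 2 and y[1] + y[2] <= 1

def bound(partial):                      # partial: tuple of fixed 0/1 values
    fixed = sum(c * v for c, v in zip(C, partial))
    free = sum(c for c in C[len(partial):] if c > 0)
    return fixed + free

def branch_and_bound():
    incumbent, incumbent_y = float("-inf"), None
    stack = [()]                         # root node: all variables free
    while stack:
        partial = stack.pop()
        if bound(partial) <= incumbent:  # cannot beat the incumbent: prune
            continue
        if len(partial) == len(C):       # full solution: check feasibility
            if feasible(partial):
                incumbent, incumbent_y = bound(partial), partial
        else:                            # branch on the next free variable
            stack.append(partial + (0,))
            stack.append(partial + (1,))
    return incumbent_y, incumbent

print(branch_and_bound())  # ((1, 0, 1), 26)
```

With this bound, once the incumbent reaches 26 the entire y1 = 0 subtree (bound 23) is pruned without visiting its leaves — the reason branch-and-bound typically examines far fewer nodes than total enumeration. A real MILP solver would instead bound each node with the LP relaxation of its candidate problem, and would also prune partial solutions whose fixed variables already violate a constraint.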