Functional Description of MINTO, a Mixed Integer Optimizer Version

Total Pages: 16

File Type: PDF, Size: 1020 KB

Functional description of MINTO, a Mixed INTeger Optimizer

Martin W. P. Savelsbergh
George L. Nemhauser

Georgia Institute of Technology
School of Industrial and Systems Engineering
Atlanta, GA, USA
martin.savelsbergh@isye.gatech.edu
george.nemhauser@isye.gatech.edu

Abstract

MINTO is a software system that solves mixed-integer linear programs by a branch-and-bound algorithm with linear programming relaxations. It also provides automatic constraint classification, preprocessing, primal heuristics, and constraint generation. Moreover, the user can enrich the basic algorithm by providing a variety of specialized application routines that can customize MINTO to achieve maximum efficiency for a problem class. This paper documents MINTO by specifying what it is capable of doing and how to use it.

Introduction

MINTO (Mixed INTeger Optimizer) is a tool for solving mixed-integer linear programming (MIP) problems of the form

\[
\begin{aligned}
\max\ & \sum_{j \in B} c_j x_j + \sum_{j \in I} c_j x_j + \sum_{j \in C} c_j x_j \\
\text{s.t.}\ & \sum_{j \in B} a_{ij} x_j + \sum_{j \in I} a_{ij} x_j + \sum_{j \in C} a_{ij} x_j \;\sim\; b_i, \qquad i = 1, \dots, m, \\
& x_j \in \{0, 1\},\ j \in B, \qquad l_j \le x_j \le u_j,\ j \in I \cup C, \\
& x_j \in \mathbb{Z},\ j \in B \cup I, \qquad x_j \in \mathbb{R},\ j \in C,
\end{aligned}
\]

where B is the set of binary variables, I is the set of integer variables, C is the set of continuous variables, the sense "~" of a constraint can be ≤, =, or ≥, and the lower and upper bounds may be negative or positive infinity or any rational number.

A great variety of problems of resource allocation, location, distribution, production, scheduling, reliability, and design can be represented by MIP models. One reason for this rich modeling capability is that various nonlinear and nonconvex optimization problems can be posed as MIP problems. Unfortunately, this robust modeling capability is not supported by a comparable algorithmic capability. Existing branch-and-bound codes for solving MIP problems are far too limited in the size of problems that can be solved reliably, relative to the size of problems that need to be solved, especially with respect to the number of integer variables, and they perform too slowly for many real-time applications. To remedy this situation, special-purpose codes have been developed for particular applications, and in some cases experts have been able to stretch the capabilities of the general codes with ad hoc approaches. But neither of these remedies is satisfactory. The first is very expensive and time-consuming, and the second should be necessary only for a very limited number of instances.

Our idea of what is needed to solve large mixed-integer programs efficiently, without having to develop a full-blown special-purpose code in each case, is an effective general-purpose mixed-integer optimizer that can be customized through the incorporation of application functions. MINTO is such a system. Its strength is that it allows users to concentrate on problem-specific aspects rather than data structures and implementation details such as linear programming and branch-and-bound.

The heart of MINTO is a linear programming based branch-and-bound algorithm. It can be implemented on top of any LP solver that provides capabilities to solve and modify linear programs and interpret their solutions. The current version can be built on top of the CPLEX callable library, the Optimization Subroutine Library (OSL), or the XPRESS-MP Optimisation Subroutine Library (XOSL).
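The control flow of such an LP-based branch-and-bound algorithm (solve a relaxation at every node, prune a node when its bound cannot beat the incumbent, otherwise branch on a fractional variable) can be illustrated with a small self-contained sketch. The program below is not MINTO code and calls no LP solver: it solves a tiny 0-1 knapsack whose LP relaxation happens to be solvable by a greedy rule because the items are pre-sorted by profit/weight ratio, and the fixed depth-first search order stands in for MINTO's adaptive node and variable selection. All names are our own.

/* Illustrative LP-based branch-and-bound on a tiny 0-1 knapsack.
 * Not MINTO code: the "LP relaxation" is solved greedily, which is valid
 * only for knapsack with items sorted by profit/weight ratio; MINTO calls
 * a general LP solver instead. */
#include <stdio.h>

#define N 4
static const double profit[N] = {7, 10, 24, 25};   /* sorted by profit/weight */
static const double weight[N] = {1, 2, 5, 6};
static const double capacity  = 7;

static double best = 0.0;                          /* incumbent value */

/* Greedy fractional (Dantzig) bound over the still-free variables. */
static double lp_bound(const int *fixed, double used, double value) {
    for (int j = 0; j < N; j++) {
        if (fixed[j] != -1) continue;              /* already fixed by branching */
        double room = capacity - used;
        if (room <= 0) break;
        double take = weight[j] < room ? weight[j] : room;
        value += profit[j] * (take / weight[j]);
        used  += take;
    }
    return value;
}

static void branch_and_bound(int depth, int *fixed, double used, double value) {
    if (used > capacity) return;                   /* infeasible node */
    if (lp_bound(fixed, used, value) <= best) return;   /* prune by LP bound */
    if (depth == N) {                              /* all variables fixed: integral */
        if (value > best) { best = value; printf("new incumbent %.1f\n", best); }
        return;
    }
    fixed[depth] = 1;                              /* branch x[depth] = 1 ... */
    branch_and_bound(depth + 1, fixed, used + weight[depth], value + profit[depth]);
    fixed[depth] = 0;                              /* ... then x[depth] = 0 */
    branch_and_bound(depth + 1, fixed, used, value);
    fixed[depth] = -1;                             /* free the variable again */
}

int main(void) {
    int fixed[N] = {-1, -1, -1, -1};               /* -1 = free, 0/1 = fixed */
    branch_and_bound(0, fixed, 0.0, 0.0);
    printf("optimal value %.1f\n", best);
    return 0;
}

MINTO replaces the greedy bound with calls to the underlying LP solver (CPLEX, OSL, or XOSL) and adds the preprocessing, primal heuristics, and cut generation described below.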
To be as effective and efficient as possible when used as a general-purpose mixed-integer optimizer, MINTO attempts to
• improve the formulation by preprocessing and probing,
• construct feasible solutions,
• generate strong valid inequalities,
• perform variable fixing based on reduced prices, and
• control the size of the linear programs by managing active constraints.

To be as flexible and powerful as possible when used to build a special-purpose mixed-integer optimizer, MINTO provides various mechanisms for incorporating problem-specific knowledge. Finally, to make future algorithmic developments easy to incorporate, MINTO's design is highly modular.

This document focuses on the mechanisms for incorporating problem structure and contains only a minimal description of the general-purpose techniques mentioned above. The mechanisms for incorporating problem structure and customizing MINTO are discussed in the sections on information, application, control, bypass, and miscellaneous functions. Subsequent sections explain how to run MINTO, present the overall system design and a brief description of the system functions, discuss the development of applications that call MINTO recursively, and cover environment variables, direct access to the linear program, programming considerations, and computational results. The final section contains some remarks on availability and future releases.

This release of MINTO was prompted by the addition of XPRESS-MP as an alternative LP solver, a complete rewrite of the preprocessor, the addition of improved and more flexible branching schemes, and the port to Windows NT.
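One of the general-purpose techniques listed above, variable fixing based on reduced prices (reduced-cost fixing), admits a compact sketch. The routine below is our own illustration under an assumed data layout, not MINTO's interface: for a maximization problem, a nonbasic 0-1 variable sitting at one of its bounds can be fixed at that bound whenever moving it to the opposite bound would pull the LP bound down to or below the incumbent value, because no improving solution can then have the variable at the opposite bound.

/* Reduced-cost fixing for 0-1 variables in a maximization problem.
 * A sketch under our own data layout, not MINTO's interface.
 * For a nonbasic binary variable at one of its bounds with reduced cost
 * d[j], moving it to the opposite bound lowers the LP bound zlp by at
 * least |d[j]|.  If zlp - |d[j]| <= zinc (the incumbent value), the
 * variable may be fixed at its current bound without losing any
 * strictly better solution. */
#include <math.h>
#include <stdio.h>

enum status { AT_LOWER, AT_UPPER, BASIC };

/* Returns the number of variables fixed; fix[j] is set to 0 or 1,
 * or left at -1 when the variable remains free. */
int reduced_cost_fixing(int n, const enum status *stat, const double *d,
                        double zlp, double zinc, int *fix)
{
    int nfixed = 0;
    for (int j = 0; j < n; j++) {
        fix[j] = -1;
        if (stat[j] == BASIC) continue;              /* only nonbasic variables */
        if (zlp - fabs(d[j]) <= zinc) {              /* opposite bound cannot win */
            fix[j] = (stat[j] == AT_LOWER) ? 0 : 1;  /* keep the current bound */
            nfixed++;
        }
    }
    return nfixed;
}

int main(void) {
    /* Hypothetical data: LP bound 104.5, incumbent 100. */
    enum status stat[3] = {AT_LOWER, AT_UPPER, BASIC};
    double d[3] = {-6.0, 2.0, 0.0};
    int fix[3];
    int k = reduced_cost_fixing(3, stat, d, 104.5, 100.0, fix);
    printf("fixed %d variable(s): x0=%d x1=%d x2=%d\n", k, fix[0], fix[1], fix[2]);
    return 0;
}

Applied at every node with that node's LP value, fixings of this kind shrink the linear programs that remain to be solved, complementing the active-constraint management mentioned above.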
Running MINTO

The following command should be used to invoke MINTO:

minto [-x] [-o] [-m] [-t] [-b] [-e] [-E] [-p] [-h] [-c] [-i] [-k] [-g] [-f] [-r] [-R] [-B] [-s] [-n] [-a] name

By default, MINTO assumes that the mixed-integer program to be solved is specified in MPS format in a file name.mps in the current working directory. It is also possible to generate the mixed-integer program directly within MINTO using the application function appl_mps (see the section on application functions). However, even in that case MINTO requires a name to be specified on the command line. MINTO always opens a file name.log to write out information that may be useful to determine the cause of a premature exit, in case this occurs. Furthermore, this name will be used as the problem name, which would otherwise have been specified in the MPS file.

The run-time behavior of MINTO depends on the command line options. The meanings of the various command line options are given in the table below. The command line options allow the user to selectively deactivate one or more system functions and to specify the amount of output desired.

option   effect
-x       assume maximization problem
-o       level of output
-m       maximum number of nodes to be evaluated
-t       maximum cpu time in seconds
-b       deactivate bound improvement
-e       type of branching
-E       type of node selection
-p       level of preprocessing and probing
-h       deactivate primal heuristic
-c       deactivate clique generation
-i       deactivate implication generation
-k       deactivate knapsack cover generation
-g       deactivate GUB cover generation
-f       deactivate flow cover generation
-r       deactivate row management
-R       deactivate restarts
-B       type of forced branching
-s       deactivate all system functions
-n       activate names mode
-a       activate use of an advanced basis

Table: Command line options

MINTO assumes that the mixed-integer program to be solved represents a minimization problem, unless the -x command line option is specified, in which case MINTO assumes it represents a maximization problem.

Regardless of whether MINTO has found an optimal solution, it will abort after evaluating a default maximum number of nodes; the -m command line option can be used to change the maximum number of nodes to be evaluated. Likewise, it will abort after a default maximum amount of cpu time; the -t command line option can be used to change the maximum cpu time.

In default mode, MINTO produces very little output. The -o command line option can be used to change the amount of output generated by MINTO. There are four output levels: no, very little, normal, and extensive.

In default mode, MINTO performs preprocessing and limited probing if the number of binary variables is not too large. For a description of preprocessing and probing and all other system functions, see the section on system functions. The -p command line option can be used to change the amount of preprocessing and probing done by MINTO. There are four levels: no preprocessing and no probing; preprocessing but no probing; preprocessing and limited probing; and preprocessing and extensive probing. Probing, although potentially very effective, can be very time consuming, especially for problems with many binary variables.

A branching scheme is specified by two rules: a branching variable selection rule and a node selection rule. In default mode, MINTO uses its own adaptive variable selection and node selection rules. The -e command line option can be used to specify a branching variable selection rule. There are six branching variable selection rules: maximum infeasibility, penalty based, strong branching, pseudocost based, adaptive, and SOS branching. The -E command line option can be used to specify a node selection rule. There are five node selection rules: best bound, depth first, best projection, best estimate, and adaptive.

MINTO attempts to generate cuts to improve the formulation during the evaluation of a node. As a consequence, there is a risk of tailing off. Therefore, MINTO monitors the progress resulting from cut generation. If the amount of progress no longer warrants cut generation, MINTO forces branching. The -B command line option can be used to specify a forcing strategy. There are three strategies: no forcing at all; no forcing at the root but forcing at the other nodes; and forcing at all nodes.

In default mode, MINTO does not associate names with variables and constraints. The -n command line option can be used to indicate that an application does want names to be associated with variables and constraints.
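Among the branching variable selection rules just listed, the pseudocost-based rule keeps, for every integer variable, the average objective degradation per unit of fractionality observed so far when branching down and when branching up, and selects a fractional variable for which the predicted degradations are large. The function below is a minimal sketch with our own score formula (a weighted combination that emphasizes the weaker branch) and data layout; it is not MINTO's implementation. Strong branching, also listed above, replaces these historical estimates by partially solving the candidate branches' linear programs, which is more accurate but more expensive.

/* Pseudocost branching variable selection (sketch, not MINTO's code).
 * pcdown[j] / pcup[j] are the average objective degradations per unit of
 * fractionality observed so far when branching down / up on x[j]. */
#include <math.h>
#include <stdio.h>

#define EPS 1e-6

int select_branching_variable(int n, const double *x, const int *is_integer,
                              const double *pcdown, const double *pcup)
{
    int best_j = -1;
    double best_score = -1.0;
    for (int j = 0; j < n; j++) {
        if (!is_integer[j]) continue;
        double frac = x[j] - floor(x[j]);
        if (frac < EPS || frac > 1.0 - EPS) continue;     /* already integral */
        double down = pcdown[j] * frac;                   /* predicted degradations */
        double up   = pcup[j] * (1.0 - frac);
        double lo = down < up ? down : up;
        double hi = down < up ? up : down;
        double score = (5.0 * lo + hi) / 6.0;             /* weight the weaker branch */
        if (score > best_score) { best_score = score; best_j = j; }
    }
    return best_j;   /* -1 means the LP solution is already integral */
}

int main(void) {
    double x[3]     = {0.5, 2.0, 1.8};
    int    isint[3] = {1, 1, 1};
    double pcd[3]   = {4.0, 1.0, 3.0};
    double pcu[3]   = {2.0, 1.0, 10.0};
    printf("branch on x%d\n", select_branching_variable(3, x, isint, pcd, pcu));
    return 0;
}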
Recommended publications
  • How to Use Your Favorite MIP Solver: Modeling, Solving, Cannibalizing
How to use your favorite MIP Solver: modeling, solving, cannibalizing. Andrea Lodi, University of Bologna, Italy, [email protected]. January-February, 2012 @ Universität Wien.

Setting
• We consider a general Mixed Integer Program in the form:
\[ \max\{\, c^T x : Ax \le b,\ x \ge 0,\ x_j \in \mathbb{Z}\ \forall j \in I \,\} \tag{1} \]
where matrix A does not have a special structure.
• Thus, the problem is solved through branch-and-bound and the bounds are computed by iteratively solving the LP relaxations through a general-purpose LP solver.
• The course basically covers the MIP but we will try to discuss when possible how crucial the LP component (the engine) is, and how much the whole framework is built on top of the capability of effectively solving LPs.
• Roughly speaking, using the LP computation as a tool, MIP solvers integrate the branch-and-bound and the cutting plane algorithms through variations of the general branch-and-cut scheme [Padberg & Rinaldi 1987] developed in the context of the Traveling Salesman Problem (TSP).
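To make the role of the LP relaxation bound, and of a cutting plane that tightens it, concrete, here is a two-variable toy instance (our own illustration, not taken from the course material):

\[ z^{*} \;=\; \max\{\, x_1 + x_2 \;:\; 2x_1 + 2x_2 \le 3,\ x \ge 0,\ x \in \mathbb{Z}^2 \,\}. \]

The LP relaxation (integrality dropped) has optimal value 3/2, attained for instance at (3/2, 0), while the integer optimum is 1, attained at (1, 0). Dividing the constraint by 2 and rounding down the right-hand side gives the Chvátal-Gomory cut x_1 + x_2 ≤ 1; adding it to the relaxation makes the LP bound coincide with the integer optimum. Branch-and-cut interleaves exactly this kind of cut generation with branching on fractional variables.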
  • Nonlinear Integer Programming ∗
Nonlinear Integer Programming ∗

Raymond Hemmecke, Matthias Köppe, Jon Lee and Robert Weismantel

Abstract. Research efforts of the past fifty years have led to a development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.

Raymond Hemmecke, Otto-von-Guericke-Universität Magdeburg, FMA/IMO, Universitätsplatz 2, 39106 Magdeburg, Germany, e-mail: [email protected]
Matthias Köppe, University of California, Davis, Dept. of Mathematics, One Shields Avenue, Davis, CA, 95616, USA, e-mail: [email protected]
Jon Lee, IBM T.J.
  • Functional Description of MINTO, a Mixed Integer Optimizer
    Functional description of MINTO, a mixed integer optimizer Citation for published version (APA): Savelsbergh, M. W. P., Sigismondi, G. C., & Nemhauser, G. L. (1991). Functional description of MINTO, a mixed integer optimizer. (1st version ed.) (Memorandum COSOR; Vol. 9117). Technische Universiteit Eindhoven. Document status and date: Published: 01/01/1991 Document Version: Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers) Please check the document version of this publication: • A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers. Link to publication General rights Copyright and moral rights for the publications made accessible in the public portal are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. • Users may download and print one copy of any publication from the public portal for the purpose of private study or research. • You may not further distribute the material or use it for any profit-making activity or commercial gain • You may freely distribute the URL identifying the publication in the public portal.
  • A Tutorial on Linear and Bilinear Matrix Inequalities
Journal of Process Control 10 (2000) 363-385, www.elsevier.com/locate/jprocont

Review: A tutorial on linear and bilinear matrix inequalities
Jeremy G. VanAntwerp, Richard D. Braatz*
Large Scale Systems Research Laboratory, Department of Chemical Engineering, University of Illinois at Urbana-Champaign, 600 South Mathews Avenue, Box C-3, Urbana, Illinois 61801-3792, USA

Abstract: This is a tutorial on the mathematical theory and process control applications of linear matrix inequalities (LMIs) and bilinear matrix inequalities (BMIs). Many convex inequalities common in process control applications are shown to be LMIs. Proofs are included to familiarize the reader with the mathematics of LMIs and BMIs. LMIs and BMIs are applied to several important process control applications including control structure selection, robust controller analysis and design, and optimal design of experiments. Software for solving LMI and BMI problems is reviewed. © 2000 Published by Elsevier Science Ltd. All rights reserved.

1. Introduction

A linear matrix inequality (LMI) is a convex constraint. Consequently, optimization problems with convex objective functions and LMI constraints are solvable relatively efficiently with off-the-shelf software. The form of an LMI is very general. Linear inequalities, convex quadratic inequalities, matrix norm inequalities, and various constraints from control theory such as Lyapunov and Riccati inequalities can all be written as LMIs.

One of the main reasons for this is that process control engineers are generally unfamiliar with the mathematics of LMI/BMIs, and there is no introductory text available to aid the control engineer in learning these mathematics. As of the writing of this paper, the only text that covers LMIs in any depth is the research monograph of Boyd and co-workers [22]. Although this monograph is a useful roadmap for locating LMI results scattered throughout the electrical engineering literature, it is not a textbook for teaching the concepts
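For readers who have not seen one, the generic form of an LMI and the classical Lyapunov example are as follows (standard textbook material, added here for orientation rather than quoted from the paper):

\[ F(x) \;=\; F_0 + x_1 F_1 + \dots + x_m F_m \;\succeq\; 0, \]

where the symmetric matrices F_0, ..., F_m are given data, x ∈ R^m is the decision variable, and "⪰ 0" means positive semidefinite; the feasible set is convex because it is the intersection over all vectors v of the half-spaces {x : v^T F(x) v ≥ 0}. For instance, asymptotic stability of the linear system dξ/dt = Aξ is equivalent to the existence of a matrix P satisfying P ≻ 0 and A^T P + P A ≺ 0, and both conditions are linear in the entries of P, hence (after a sign change) LMIs.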
  • Neos & Htcondor: Optimizing Your World
neos & HTCondor: Optimizing Your World
neos: Network-Enabled Optimization System

Mathematical formulation (bar-crawl optimization example):

\[
\begin{aligned}
\text{Maximize}\quad & \sum_{j \in V} b_j y_j \;-\; \alpha \sum_{(i,j) \in A} d_{ij} x_{ij} \\
\text{subject to (exiting node on tour)}\quad & \sum_{(i,j) \in A} x_{ij} - y_i = 0, \quad \forall i \in V \\
\text{subject to (entering node on tour)}\quad & \sum_{(i,j) \in A} x_{ij} - y_j = 0, \quad \forall j \in V
\end{aligned}
\]

AMPL Model:

set V;
set LINKS := {i in V, j in V: i <> j};
param alpha >= 0;
param d{LINKS} >= 0;
param b{V} >= 0;    # default benefit of visiting a bar
param c{V} >= 0;    # default cost of one drink
param B default 30; # default maximum budget for drinks

http://www.neos-guide.org/content/bar-crawl-optimization

[Diagram: neos workflow. Optimization jobs are submitted via browser/web, email, Kestrel (from AMPL or GAMS), custom interfaces, or XMLRPC, routed to the solver pool at UW-Madison or to off-site solvers at Argonne National Lab, Arizona State University, the University of Klagenfurt, and the University of Minho; results are returned to the user.]

Solver categories: bco, co, cp, go, kestrel, lno, lp, milp, minco, miocp, nco, ndo, sdp, sio, slp, socp, uco

Solver names: AlphaECP, ASA, BARON, BDMLP, BiqMac, BLMVM, bnbs, Bonmin, bpmpd, Cbc, Clp, concorde, CONDOR, CONOPT, Couenne, CPLEX, csdp, ddsip, DICOPT, Domino, DSDP, feaspump, FilMINT, filter, filterMPEC, Gurobi, icos, Ipopt, KNITRO, LANCELOT, L-BFGS-B, LINDOGlobal, LOQO, LRAMBO, MILES, MINLP, MINOS, MINTO, MOSEK, MUSCOD-II, NLPEC, NMTR, nsips, OOQP, PATH, PATHNLP, PENBMI, PENSDP, PGAPack, proxy, PSwarm, QSopt_EX, RELAX4, SBB, scip, SD, SDPA, SDPLR, SDPT3, SeDuMi, SNOPT, SYMPHONY, TRON, XpressMP

Solver inputs: AMPL, C, CPLEX, Fortran, GAMS, jpg, LP, MATLAB_BINARY, MOSEL, MPS, OSIL, RELAX4, SDPA, SDPLR, SMPS, SPARSE, SPARSE_SDPA, TSP, ZIMPL

[Chart: NEOS jobs per month, broken down by site (Argonne, Arizona State, Klagenfurt, Minho).]
  • Domain Reduction Techniques for Global NLP and MINLP Optimization
Domain reduction techniques for global NLP and MINLP optimization
Y. Puranik and Nikolaos V. Sahinidis
Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA
(arXiv:1706.08601v1 [cs.DS], 26 Jun 2017)

Abstract: Optimization solvers routinely utilize presolve techniques, including model simplification, reformulation and domain reduction techniques. Domain reduction techniques are especially important in speeding up convergence to the global optimum for challenging nonconvex nonlinear programming (NLP) and mixed-integer nonlinear programming (MINLP) optimization problems. In this work, we survey the various techniques used for domain reduction of NLP and MINLP optimization problems. We also present a computational analysis of the impact of these techniques on the performance of various widely available global solvers on a collection of 1740 test problems.

Keywords: Constraint propagation; feasibility-based bounds tightening; optimality-based bounds tightening; domain reduction

1 Introduction

We consider the following mixed-integer nonlinear programming problem (MINLP):

\[
\begin{aligned}
\min\ & f(\vec{x}) \\
\text{s.t.}\ & g(\vec{x}) \le 0 \\
& \vec{x}_l \le \vec{x} \le \vec{x}_u \\
& \vec{x} \in \mathbb{R}^{n-m} \times \mathbb{Z}^m
\end{aligned}
\tag{1}
\]

MINLP is a very general representation for optimization problems and includes linear programming (LP), mixed-integer linear programming (MIP) and [...] combinatorial complexity introduced by the integer variables, nonconvexities in objective function or the feasible region lead to multiple local minima and provide a challenge to the optimization of such problems. Branch-and-bound [138] based methods can be exploited to solve Problem 1 to global optimality. Inspired by branch-and-bound for discrete programming problems [109], branch-and-bound was adapted for continuous problems by Falk and Soland [58]. The algorithm proceeds by bounding the global optimum by a valid lower and upper bound throughout the search process.
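One of the surveyed ingredients, feasibility-based bounds tightening (constraint propagation on individual rows), fits in a few lines of code. The routine below is our own sketch, not taken from the paper or from any solver: it propagates a single linear constraint sum_j a_j x_j <= b over the variable bounds, and a real presolver would iterate this over all rows, handle equality rows, and guard against numerical round-off.

/* Feasibility-based bound tightening on a single linear constraint
 *   sum_j a[j] * x[j] <= b   with   lb[j] <= x[j] <= ub[j].
 * A sketch of the basic propagation step surveyed in the paper; the
 * data layout is ours.  Returns the number of bounds tightened. */
#include <stdio.h>

int tighten_bounds(int n, const double *a, double b, double *lb, double *ub)
{
    double minact = 0.0;                     /* minimum activity of the row */
    for (int j = 0; j < n; j++)
        minact += (a[j] > 0.0) ? a[j] * lb[j] : a[j] * ub[j];

    int changed = 0;
    for (int k = 0; k < n; k++) {
        if (a[k] > 0.0) {                    /* residual slack bounds x[k] above */
            double slack = b - (minact - a[k] * lb[k]);
            double newub = slack / a[k];
            if (newub < ub[k]) { ub[k] = newub; changed++; }
        } else if (a[k] < 0.0) {             /* ... and below for negative a[k] */
            double slack = b - (minact - a[k] * ub[k]);
            double newlb = slack / a[k];
            if (newlb > lb[k]) { lb[k] = newlb; changed++; }
        }
    }
    return changed;
}

int main(void) {
    double a[2] = {2.0, 3.0}, lb[2] = {0.0, 2.0}, ub[2] = {10.0, 4.0};
    int k = tighten_bounds(2, a, 12.0, lb, ub);   /* 2x1 + 3x2 <= 12, x2 >= 2 */
    printf("%d bound(s) tightened; x1 <= %.1f\n", k, ub[0]);
    return 0;
}

In the small example in main, the constraint 2x1 + 3x2 <= 12 together with x2 >= 2 tightens the upper bound of x1 from 10 to 3.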
  • Open Source and Free Software
WP. 24, ENGLISH ONLY
UNITED NATIONS STATISTICAL COMMISSION and ECONOMIC COMMISSION FOR EUROPE, CONFERENCE OF EUROPEAN STATISTICIANS / EUROPEAN COMMISSION, STATISTICAL OFFICE OF THE EUROPEAN COMMUNITIES (EUROSTAT)
Joint UNECE/Eurostat work session on statistical data confidentiality (Bilbao, Spain, 2-4 December 2009)
Topic (iv): Tools and software improvements
ON OPEN SOURCE SOFTWARE FOR STATISTICAL DISCLOSURE LIMITATION
Invited Paper prepared by Juan José Salazar González, University of La Laguna, Spain

On open source software for Statistical Disclosure Limitation
Juan José Salazar González*
* Department of Statistics, Operations Research and Computer Science, University of La Laguna, 38271 Tenerife, Spain, e-mail: [email protected]

Abstract: Much effort has been done in the last years to design, analyze, implement and compare different methodologies to ensure confidentiality during data publication. The knowledge of these methods is public, but the practical implementations are subject to different license constraints. This paper summarizes some concepts on free and open source software, and gives some light on the convenience of using these schemes when building automatic tools in Statistical Disclosure Limitation. Our results are based on some computational experiments comparing different mathematical programming solvers when applying controlled rounding to tabular data.

1 Introduction

Statistical agencies must guarantee confidentiality of data respondents by using the best of modern technology. Today data snoopers may have powerful computers, thus statistical agencies need sophisticated computer programs to ensure that released information is protected against attackers. Recent research has designed different optimization techniques that ensure protection. Unfortunately the number of users of these techniques is quite small (mainly national and regional statistical agencies), thus there is not enough market to stimulate the development of several implementations, all competing for being the best.
  • CME 338 Large-Scale Numerical Optimization Notes 2
Stanford University, ICME
CME 338 Large-Scale Numerical Optimization
Instructor: Michael Saunders, Spring 2019
Notes 2: Overview of Optimization Software

1 Optimization problems

We study optimization problems involving linear and nonlinear constraints:

\[
\text{NP}\qquad \min_{x \in \mathbb{R}^n} \ \phi(x) \qquad \text{subject to} \qquad \ell \;\le\; \begin{pmatrix} x \\ Ax \\ c(x) \end{pmatrix} \;\le\; u,
\]

where φ(x) is a linear or nonlinear objective function, A is a sparse matrix, c(x) is a vector of nonlinear constraint functions c_i(x), and ℓ and u are vectors of lower and upper bounds. We assume the functions φ(x) and c_i(x) are smooth: they are continuous and have continuous first derivatives (gradients). Sometimes gradients are not available (or too expensive) and we use finite difference approximations. Sometimes we need second derivatives. We study algorithms that find a local optimum for problem NP. Some examples follow. If there are many local optima, the starting point is important.

LP (Linear Programming): min c^T x subject to ℓ ≤ (x; Ax) ≤ u.
Solvers: MINOS, SNOPT, SQOPT; LSSOL, QPOPT, NPSOL (dense); CPLEX, Gurobi, LOQO, HOPDM, MOSEK, XPRESS; CLP, lp_solve, SoPlex (open source solvers [7, 34, 57]).

QP (Quadratic Programming): min c^T x + (1/2) x^T H x subject to ℓ ≤ (x; Ax) ≤ u.
Solvers: MINOS, SQOPT, SNOPT, QPBLUR; LSSOL (H = B^T B, least squares), QPOPT (H indefinite); CLP, CPLEX, Gurobi, LANCELOT, LOQO, MOSEK.

BC (Bound Constraints): min φ(x) subject to ℓ ≤ x ≤ u.
Solvers: MINOS, SNOPT; LANCELOT, L-BFGS-B.

LC (Linear Constraints): min φ(x) subject to ℓ ≤ (x; Ax) ≤ u.
Solvers: MINOS, SNOPT, NPSOL.

NC (Nonlinear Constraints): min φ(x) subject to ℓ ≤ (x; Ax; c(x)) ≤ u.
Solvers: MINOS, SNOPT, NPSOL; CONOPT, LANCELOT; Filter, KNITRO, LOQO (second derivatives); IPOPT (open source solver [30]).

Algorithms for finding local optima are used to construct algorithms for more complex optimization problems: stochastic, nonsmooth, global, mixed integer.
  • SYMPHONY 5.6.9 User's Manual
SYMPHONY 5.6.9 User's Manual
T. K. Ralphs (Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, PA 18017, [email protected], http://coral.ielehigh.edu/~ted), M. Güzelsoy (SAS Institute, Cary, NC, [email protected]), A. Mahajan (Department of Industrial Engineering and Operations Research, IIT Bombay, India, [email protected], http://www.ieor.iitb.ac.in/amahajan)
March 10, 2015

This research was partially supported by NSF Grants DMS-9527124, DMI-0534862, DMI-0522796, CMMI-0728011, CMMI-1130914, as well as Texas ATP Grant 97-3604-010. A revised version of Chapter 4 of this manual now appears in the Springer-Verlag book Computational Combinatorial Optimization edited by M. Jünger and D. Naddef, see http://link.springer.de/link/service/series/0558/tocs/t2241.htm

© 2000-2015 Ted Ralphs

Acknowledgments

First and foremost, many thanks are due to Laci Ladányi, who worked with me on the development of a very early precursor of SYMPHONY called COMPSys many years ago now and who taught me much of what I then knew about programming. Thanks are due also to Marta Esö, who wrote an early draft of this manual for what was then COMPSys. This release would not have been possible without the help of both Menal Güzelsoy, who has been instrumental in the development of SYMPHONY since version 4.0, and Ashutosh Mahajan, who has worked on SYMPHONY since version 5.0. In particular, Ashutosh and Menal did all of the work that went into improving SYMPHONY for release 5.2. I would also like to thank Matthew Galati and Ondrej Medek, who contributed to the development of SYMPHONY over the years.
  • A MINTO Short Course 1
A MINTO short course
Martin W. P. Savelsbergh, George L. Nemhauser
Georgia Institute of Technology, School of Industrial and Systems Engineering, Atlanta, GA, USA

Introduction

MINTO is a software system that solves mixed-integer linear programs by a branch-and-bound algorithm with linear programming relaxations. It also provides automatic constraint classification, preprocessing, primal heuristics, and constraint generation. Moreover, the user can enrich the basic algorithm by providing a variety of specialized application routines that can customize MINTO to achieve maximum efficiency for a problem class. An overview of MINTO, discussing the design philosophy and general features, can be found in Nemhauser, Savelsbergh, and Sigismondi; a detailed description of the customization options can be found in Savelsbergh and Nemhauser; and an in-depth presentation of some of the system functions of MINTO can be found in Savelsbergh and in Gu, Nemhauser, and Savelsbergh. General references on mixed-integer linear programming are Schrijver and Nemhauser and Wolsey.

This short course illustrates many of MINTO's capabilities and teaches the basic MINTO customization process through a set of exercises. Most exercises require the development of one or more application functions in order to customize MINTO so as to be more effective on a class of problems. The exercises are presented in order of increasing difficulty. This short course can be used to complement a standard course on integer programming. Because MINTO always works with a maximization problem
  • MINLP Solver Software
MINLP Solver Software
Michael R. Bussieck and Stefan Vigerske
GAMS Development Corp., 1217 Potomac St, NW Washington, DC 20007, USA
[email protected], [email protected]
March 10, 2014

Abstract: In this article we give a brief overview of the state-of-the-art in software for the solution of mixed integer nonlinear programs (MINLP). We establish several groupings with respect to various features and give concise individual descriptions for each solver. The provided information may guide the selection of a best solver for a particular MINLP problem.

Keywords: mixed integer nonlinear programming, solver, software, MINLP, MIQCP

1 Introduction

The general form of an MINLP is

\[
\begin{aligned}
\text{minimize}\quad & f(x, y) \\
\text{subject to}\quad & g(x, y) \le 0 \\
& x \in X, \quad y \in Y \text{ integer}
\end{aligned}
\tag{P}
\]

The function f : R^{n+s} → R is a possibly nonlinear objective function and g : R^{n+s} → R^m a possibly nonlinear constraint function. Most algorithms require the functions f and g to be continuous and differentiable, but some may even allow for singular discontinuities. The variables x and y are the decision variables, where y is required to be integer valued. The sets X ⊆ R^n and Y ⊆ R^s are bounding-box-type restrictions on the variables. Additionally to integer requirements on variables, other kinds of discrete constraints are commonly used. These are, e.g., special-ordered-set constraints (only one (SOS type 1) or two consecutive (SOS type 2) variables in an (ordered) set are allowed to be nonzero) [8], semicontinuous variables (the variable is allowed to take either the value zero or a value above some bound), semiinteger variables (like semicontinuous variables, but with an additional integer restriction), and indicator variables (a binary variable indicates whether a certain set of constraints has to be enforced).
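As a concrete illustration of one of these discrete constructs (a standard formulation, not taken from the article): a semicontinuous variable x, which must either equal 0 or lie in [l, u] with l > 0, can be modeled with an auxiliary binary variable y via

\[ l\,y \;\le\; x \;\le\; u\,y, \qquad y \in \{0,1\}, \]

so that y = 0 forces x = 0 and y = 1 forces l ≤ x ≤ u. Semiinteger variables additionally require x to be integer, and indicator variables play the same switching role for whole sets of constraints, typically via big-M terms or dedicated indicator constraints in the solver.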