A Comparison of Methods for Solving MAX-SAT Problems

Steve Joy, John E. Mitchell
Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
email: [email protected], [email protected]
http://www.math.rpi.edu/~mitchj

Brian Borchers
Mathematical Sciences, New Mexico Tech, Socorro, NM 87801, USA.
email: [email protected]
http://www.nmtech.edu/~borchers

Seattle INFORMS Conference, October 25-28, 1998, Session MD29

Abstract

We compare the performance of three approaches to the solution of MAX-SAT problems: a version of the Davis-Putnam-Loveland algorithm extended to solve MAX-SAT, an integer programming branch-and-cut algorithm, and an algorithm for MAX-2-SAT problems based on a semidefinite programming relaxation.

1 Overview

Definition of MAXSAT.
Expressing MAXSAT as an integer program.
Cutting planes.
Extended Davis-Putnam-Loveland algorithm.
Semidefinite programming relaxation of MAX2SAT.
Computational results.
Conclusions.

2 Definition of MAXSAT

A propositional logic formula F can always be written in conjunctive normal form with m clauses and n variables. Each clause is a disjunction of a set of literals. Each literal is either a variable or the negation of a variable. The formula is the conjunction of the clauses. For example:

    F = (X1 ∨ X2) ∧ (X1 ∨ X2 ∨ X3) ∧ (X2 ∨ X3).

The satisfiability problem, SAT: Given a propositional logic formula F in conjunctive normal form, is there an assignment of the logical variables that makes F true?

The maximum satisfiability problem, MAXSAT: Given a propositional logic formula F in conjunctive normal form, find an assignment of the logical variables that maximizes the number of clauses in F that are true.

Computational complexity

SAT is NP-complete. Even if each clause has exactly three literals, SAT is still NP-complete. Some special cases are solvable in polynomial time, including 2-SAT and HORN-SAT.

MAXSAT is NP-hard even if each clause has only two literals.

There is no polynomial time approximation scheme (PTAS) for MAXSAT unless P = NP.

It is possible to approximate MAXSAT to within a factor of 1.32 and MAX2SAT to within a factor of 1.14 in polynomial time, using a semidefinite programming approach (Goemans and Williamson, 1995).

3 An integer programming approach

Any weighted MAXSAT instance can be expressed as an equivalent integer programming problem. For example:

    C1 = ¬v1 ∨ ¬v2         weight 1
    C2 = v1 ∨ v2 ∨ v3      weight 4
    C3 = v1 ∨ ¬v2          weight 3

Equivalent to:

    min   w1 + 4 w2 + 3 w3
    s.t.  (1 − x1) + (1 − x2) + w1 ≥ 1
          x1 + x2 + x3 + w2 ≥ 1
          x1 + (1 − x2) + w3 ≥ 1
          xi = 0 or 1, i = 1, ..., 3;  wi = 0 or 1, i = 1, ..., 3.

We call the added variables w1, ..., w3 weighted variables.

Branch-and-cut

Use a primal heuristic similar to GSAT (Selman et al., 1992, 1996) or the heuristics of Gu (1992, 1993, 1994) to find a good solution.

Solve the linear programming relaxation obtained by requiring 0 ≤ xi ≤ 1 for i = 1, ..., n and 0 ≤ wj ≤ 1 for j = 1, ..., m.

If the solution to the relaxation is non-integral, modify it by adding cutting planes, or branch.

Use three different types of cutting planes:
- Resolution cuts (Hooker, 1988).
- Odd cycle inequalities (Cheriyan et al., 1996).
- Clique cuts.

The cuts are added locally, so we don't lift them to be valid throughout the branch-and-cut tree.

Branching: split the relaxation into two problems, in one of which a specific variable is required to equal zero and in the other it is required to equal one.

The algorithm is written in C. It uses MINTO to handle the branch-and-cut tree, which uses CPLEX 4.0 to solve the linear programming relaxations.
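To make the clause-to-constraint mapping of Section 3 concrete, here is a minimal C sketch (illustrative only, not the authors' MINTO code; the Clause type and build_row function are invented names): each positive literal contributes a +1 coefficient, each negated literal contributes a −1 coefficient and lowers the right-hand side by one, and the clause's weighted variable gets a +1 coefficient, giving the row Σ_{i∈P} xi − Σ_{i∈N} xi + wj ≥ 1 − |N|.

    #include <stdio.h>

    /* A clause is a list of literals: literal +i means v_i, -i means NOT v_i.
       (Hypothetical data structure, for illustration only.) */
    typedef struct { int nlits; int lits[8]; double weight; } Clause;

    /* Build the LP/IP row for one clause:
       sum_{i in P} x_i - sum_{i in N} x_i + w_j >= 1 - |N|. */
    static void build_row(const Clause *c, double *coef, int nvars,
                          double *wcoef, double *rhs)
    {
        int nneg = 0;
        for (int i = 0; i < nvars; i++) coef[i] = 0.0;
        for (int k = 0; k < c->nlits; k++) {
            int lit = c->lits[k];
            if (lit > 0) coef[lit - 1] += 1.0;
            else { coef[-lit - 1] -= 1.0; nneg++; }
        }
        *wcoef = 1.0;
        *rhs = 1.0 - (double) nneg;
    }

    int main(void)
    {
        /* The three example clauses from Section 3. */
        Clause cs[3] = {
            { 2, { -1, -2 },    1.0 },
            { 3, {  1,  2, 3 }, 4.0 },
            { 2, {  1, -2 },    3.0 }
        };
        double coef[3], wcoef, rhs;
        for (int j = 0; j < 3; j++) {
            build_row(&cs[j], coef, 3, &wcoef, &rhs);
            printf("C%d: %+g x1 %+g x2 %+g x3 %+g w%d >= %g  (weight %g)\n",
                   j + 1, coef[0], coef[1], coef[2], wcoef, j + 1, rhs,
                   cs[j].weight);
        }
        return 0;
    }

For the first clause this prints the row −x1 − x2 + w1 ≥ −1, which is the same constraint as (1 − x1) + (1 − x2) + w1 ≥ 1 above.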
Clique cuts

Motivated by pigeonhole problems (Hooker): hole_n asks if it is possible to place n + 1 pigeons in n holes without two pigeons being in the same hole.

We describe these cuts for 2-SAT problems. Example: assume our instance of 2SAT contains the four literals l1, l2, l3, l4 and the six clauses

    l1 ∨ l2,  l1 ∨ l3,  l1 ∨ l4,  l2 ∨ l3,  l2 ∨ l4,  l3 ∨ l4.

We can think of this as giving a clique, with vertices labelled by the four literals and edges given by the six clauses. Any satisfying truth assignment must have at least three of these literals true. This gives a potential cutting plane.

This can be extended to MAX2SAT and general MAXSAT by regarding some edges in the clique as missing and modifying the cutting plane appropriately. This is really a lifting procedure.

We seek inequalities that are violated by at least 0.1. The separation routine looks for literals with large values and tries to find cliques involving such variables. The cliques are built vertex-by-vertex. We reject inequalities that contain too many weighted variables.

Resolution cuts

A classical logical procedure. Also implemented in branch-and-cut by Hooker and Fedjki, 1990.

Example: Given two clauses, one containing a variable and the other containing its complement:

    x1 ∨ x3 ∨ x7 ∨ w9
    x2 ∨ ¬x3 ∨ x7 ∨ w11

we can resolve to generate the following:

    x1 ∨ x2 ∨ x7 ∨ w9 ∨ w11

Hooker showed that this is equivalent to a Chvátal-Gomory cutting plane procedure in the integer programming setting.

Our resolution routine consists of a sequence of passes, each designed to generate resolvents satisfying certain criteria. Only accept resolved clauses that are unit clauses and that are violated by at least 0.3.

Odd cycle inequalities

Derived by Cheriyan et al., 1996, motivated by similar inequalities for MAXCUT problems.

Example:

     x1 + x2 + w10 ≥  1
     x2 − x3 + w11 ≥  0
    −x3 − x4 + w12 ≥ −1
    −x1 − x4 + w13 ≥ −1

The first, third, and fourth constraints are odd; the second is even. Adding these constraints together, dividing by 2 and rounding coefficients, we obtain:

    x2 − x3 − x4 + w10 + w11 + w12 + w13 ≥ 0.

Use a modified version of Dijkstra's algorithm to find cutting planes. Look for cuts violated by at least 0.3, and of length at most 2.

Odd cycle cuts combine together clauses of length 2. Thus, they are only effective if the current relaxation contains a large number of clauses of length 2, e.g., MAX2SAT problems. For problems with longer clauses, we can only use these cuts deep in the tree.

Branching strategy

After generating cuts and applying bounds, we branch if there are still fractional variables in the LP solution. The branching scheme we use is a modification of the Jeroslow-Wang (1990) scheme.

Jeroslow-Wang examines the probability of satisfying all the constraints if we randomly assign values to the remaining unfixed variables. It attempts to pick a variable whose assignment will have the greatest change in this probability.

Our approach differs from Jeroslow-Wang in that:
1. We give considerably less weight to singleton clauses.
2. We consider the effect on both branches.
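A common concrete realization of the Jeroslow-Wang idea scores a literal by summing 2^(-|C|) over the clauses C that contain it, so that short clauses dominate the score; a two-sided variant adds the scores of a variable's two literals, which is one way to consider the effect on both branches. The C sketch below (illustrative names and example data only; it does not reproduce the talk's exact modification, such as the reduced weight on singleton clauses) computes these scores for the Section 3 example.

    #include <stdio.h>

    #define NVARS    3
    #define NCLAUSES 3

    /* Clauses over variables 1..NVARS; literal +i means v_i, -i means NOT v_i.
       A zero ends a clause.  (Same example clauses as in Section 3.) */
    static const int clauses[NCLAUSES][4] = {
        { -1, -2, 0, 0 },
        {  1,  2, 3, 0 },
        {  1, -2, 0, 0 }
    };

    /* Standard Jeroslow-Wang score of a literal:
       J(l) = sum over clauses C containing l of 2^(-|C|). */
    static double jw_score(int lit)
    {
        double score = 0.0;
        for (int j = 0; j < NCLAUSES; j++) {
            int len = 0, present = 0;
            for (int k = 0; clauses[j][k] != 0; k++) {
                len++;
                if (clauses[j][k] == lit) present = 1;
            }
            if (present) score += 1.0 / (double) (1 << len);
        }
        return score;
    }

    int main(void)
    {
        /* Two-sided scoring: look at both branches of each variable. */
        for (int v = 1; v <= NVARS; v++)
            printf("v%d: J(+) = %.3f  J(-) = %.3f  sum = %.3f\n",
                   v, jw_score(v), jw_score(-v), jw_score(v) + jw_score(-v));
        return 0;
    }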
4 Extended Davis-Putnam-Loveland

Davis-Putnam-Loveland (1960, 1978) is a classical branching and backtracking procedure for solving SAT. Borchers and Furman (1995) extended this to solve MAXSAT problems. This was also done independently by Wallace and Freuder (1996).

We maintain an upper bound, ub, given by the number of unsatisfied clauses in the best known solution. We keep track of unsat, the number of clauses left unsatisfied by the current partial solution. If unsat > ub, backtrack from the current partial solution.

Written in C. The code uses a primal heuristic to find a good solution. It incorporates unit clause tracking when unsat = ub − 1. It uses the variable selection rule of Dubois et al.

5 Semidefinite programming

This formulation is due to Goemans and Williamson, 1995.

MAX2SAT as a QP

For each logical variable yi, introduce a ±1 variable xi. Add a variable x0. We'll set yi = TRUE if xi = x0, and FALSE otherwise. Let

    v(yk ∨ yl) := 0.25 [ (1 + x0 xk) + (1 + x0 xl) + (1 − xk xl) ].

We obtain v(yk ∨ yl) = 0 if the clause is FALSE, and 1 otherwise. Similar formulas can be constructed for clauses involving negated literals. The MAX2SAT problem becomes:

    max { Σ_{j=1}^{m} v(Cj) : xi ∈ {−1, 1}, i = 0, ..., n }.

Formulation as an SDP

Express the QP as

    max { x^T A x + c : xi ∈ {−1, 1}, i = 0, ..., n }

for a symmetric matrix A and a scalar c. Note that x^T A x = tr(XA), where X = x x^T and tr denotes the trace of a matrix.
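As a quick sanity check on the clause-value formula above, the short C program below (illustrative only, not part of the talk) fixes x0 = 1, enumerates the four ±1 assignments of xk and xl, and confirms that v(yk ∨ yl) equals 1 exactly when the clause yk ∨ yl is TRUE under the convention yi = TRUE iff xi = x0.

    #include <stdio.h>

    /* Goemans-Williamson value of a 2-clause y_k OR y_l in +-1 variables:
       v = 0.25 * ((1 + x0*xk) + (1 + x0*xl) + (1 - xk*xl)). */
    static double clause_value(int x0, int xk, int xl)
    {
        return 0.25 * ((1 + x0 * xk) + (1 + x0 * xl) + (1 - xk * xl));
    }

    int main(void)
    {
        const int x0 = 1;                      /* reference variable */
        for (int xk = -1; xk <= 1; xk += 2) {
            for (int xl = -1; xl <= 1; xl += 2) {
                int yk = (xk == x0), yl = (xl == x0);   /* truth values */
                printf("y_k=%d y_l=%d  ->  v = %.0f  (clause is %s)\n",
                       yk, yl, clause_value(x0, xk, xl),
                       (yk || yl) ? "TRUE" : "FALSE");
            }
        }
        return 0;
    }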