User's Guide for Tomlab 7

Total Pages: 16

File Type: PDF, Size: 1020 KB

USER'S GUIDE FOR TOMLAB 7

Kenneth Holmström (Professor in Optimization, Mälardalen University, Department of Mathematics and Physics, P.O. Box 883, SE-721 23 Västerås, Sweden, [email protected]), Anders O. Göran (Tomlab Optimization AB, Västerås Technology Park, Trefasgatan 4, SE-721 30 Västerås, Sweden, [email protected]) and Marcus M. Edvall (Tomlab Optimization Inc., 1260 SE Bishop Blvd Ste E, Pullman, WA 99163, USA, [email protected])

May 5, 2010

TOMLAB NOW INCLUDES THE MODELING ENGINE TomSym [see Section 4.3]!

More information is available at the TOMLAB home page, http://tomopt.com/, and at the Applied Optimization and Modeling (TOM) home page, http://www.ima.mdh.se/tom. E-mail: [email protected].

Contents

1 Introduction
  1.1 What is TOMLAB?
  1.2 The Organization of This Guide
  1.3 Further Reading
2 Overall Design
  2.1 Structure Input and Output
  2.2 Introduction to Solver and Problem Types
  2.3 The Process of Solving Optimization Problems
  2.4 Low Level Routines and Gateway Routines
3 Problem Types and Solver Routines
  3.1 Problem Types Defined in TOMLAB
  3.2 Solver Routines in TOMLAB
    3.2.1 TOMLAB Base Module
    3.2.2 TOMLAB /BARNLP
    3.2.3 TOMLAB /CGO
    3.2.4 TOMLAB /CONOPT
    3.2.5 TOMLAB /CPLEX
    3.2.6 TOMLAB /KNITRO
    3.2.7 TOMLAB /LGO
    3.2.8 TOMLAB /MINLP
    3.2.9 TOMLAB /MINOS
    3.2.10 TOMLAB /OQNLP
    3.2.11 TOMLAB /NLPQL
    3.2.12 TOMLAB /NPSOL
    3.2.13 TOMLAB /PENBMI
    3.2.14 TOMLAB /PENSDP
    3.2.15 TOMLAB /SNOPT
    3.2.16 TOMLAB /SOL
    3.2.17 TOMLAB /SPRNLP
    3.2.18 TOMLAB /XA
    3.2.19 TOMLAB /Xpress
    3.2.20 Finding Available Solvers
4 Defining Problems in TOMLAB
  4.1 The TOMLAB Format
  4.2 Modifying existing problems
    4.2.1 add_A
    4.2.2 keep_A
    4.2.3 remove_A
    4.2.4 replace_A
    4.2.5 modify_b_L
    4.2.6 modify_b_U
    4.2.7 modify_c
    4.2.8 modify_c_L
    4.2.9 modify_c_U
    4.2.10 modify_x_0
    4.2.11 modify_x_L
    4.2.12 modify_x_U
  4.3 TomSym
    4.3.1 Modeling
    4.3.2 Ezsolve
    4.3.3 Usage
    4.3.4 Scaling variables
    4.3.5 SDP/LMI/BMI interface
    4.3.6 Interface to MAD and finite differences
    4.3.7 Simplifications
    4.3.8 Special functions
    4.3.9 Procedure vs parse-tree
    4.3.10 Problems and error messages
    4.3.11 Example
5 Solving Linear, Quadratic and Integer Programming Problems
  5.1 Linear Programming Problems
    5.1.1 A Quick Linear Programming Solution
  5.2 Quadratic Programming Problems
    5.2.1 A Quick Quadratic Programming solution
  5.3 Mixed-Integer Programming Problems
6 Solving Unconstrained and Constrained Optimization Problems
  6.1 Defining the Problem in Matlab m-files
    6.1.1 Communication between user routines
  6.2 Unconstrained Optimization Problems
  6.3 Direct Call to an Optimization Routine
  6.4 Constrained Optimization Problems
  6.5 Efficient use of the TOMLAB solvers
7 Solving Global Optimization Problems
  7.1 Box-Bounded Global Optimization Problems
  7.2 Global Mixed-Integer Nonlinear Problems
8 Solving Least Squares and Parameter Estimation Problems
  8.1 Linear Least Squares Problems
  8.2 Linear Least Squares Problems using the SOL Solver LSSOL
  8.3 Nonlinear Least Squares Problems
  8.4 Fitting Sums of Exponentials to Empirical Data
  8.5 Large Scale LS problems with Tlsqr
9 Multi Layer Optimization
10 tomHelp - The Help Program
11 TOMLAB Solver Reference
  11.1 TOMLAB Base Module
    11.1.1 clsSolve
    11.1.2 conSolve
    11.1.3 cutPlane
    11.1.4 DualSolve
    11.1.5 expSolve
    11.1.6 glbDirect
    11.1.7 glbSolve
    11.1.8 glcCluster
    11.1.9 glcDirect
    11.1.10 glcSolve
    11.1.11 infLinSolve
    11.1.12 infSolve
    11.1.13 linRatSolve
    11.1.14 lpSimplex
    11.1.15 L1Solve
    11.1.16 MILPSOLVE
    11.1.17 minlpSolve
    11.1.18 mipSolve
    11.1.19 multiMin
    11.1.20 multiMINLP
    11.1.21 nlpSolve
    11.1.22 pdcoTL
    11.1.23 pdscoTL
    11.1.24 qpSolve
    11.1.25 slsSolve
    11.1.26 sTrustr
    11.1.27 Tfmin
    11.1.28 Tfzero
    11.1.29 ucSolve
    11.1.30 Additional solvers
12 TOMLAB Utility Functions
  12.1 tomRun
  12.2 addPwLinFun
  12.3 binbin2lin
  12.4 bincont2lin
  12.5 checkFuncs
  12.6 checkDerivs
  12.7 cpTransf
  12.8 estBestHessian
  12.9 lls2qp
  12.10 LineSearch
  12.11 preSolve
  12.12 PrintResult
  12.13 runtest
  12.14 SolverList
  12.15 StatLS
  12.16 systest
13 Approximation of Derivatives
14 Special Notes and Features
  14.1 Speed and Solution of Optimization Subproblems
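As a quick taste of the TomSym modeling engine highlighted above (and documented in Section 4.3 of the guide), a minimal sketch of posing and solving a small problem could look as follows; the problem itself is illustrative, not taken from the guide, and a TOMLAB installation with TomSym is assumed:

```matlab
% Minimal TomSym sketch (assumes TOMLAB with the TomSym engine installed;
% the objective and constraints below are illustrative only).
toms x y                                      % symbolic decision variables
objective   = (x - 1)^2 + (y - 2)^2;          % simple convex quadratic
constraints = {x + y <= 3; x >= 0; y >= 0};
solution = ezsolve(objective, constraints);   % ezsolve picks a suitable solver
disp([solution.x, solution.y])
```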
Recommended publications
  • University of California, San Diego
    UNIVERSITY OF CALIFORNIA, SAN DIEGO. Computational Methods for Parameter Estimation in Nonlinear Models. A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Physics with a Specialization in Computational Physics, by Bryan Andrew Toth, 2011. Committee in charge: Professor Henry D. I. Abarbanel (Chair), Professor Philip Gill, Professor Julius Kuti, Professor Gabriel Silva, Professor Frank Wuerthwein. Copyright Bryan Andrew Toth, 2011. All rights reserved. The dissertation of Bryan Andrew Toth is approved, and it is acceptable in quality and form for publication on microfilm and electronically.
    Dedication: To my grandparents, August and Virginia Toth and Willem and Jane Keur, who helped put me on a lifelong path of learning.
    Epigraph: "An Expert: One who knows more and more about less and less, until eventually he knows everything about nothing." (Source Unknown)
    Front matter: Signature Page, Dedication, Epigraph, Table of Contents, List of Figures, List of Tables, Acknowledgements, Vita and Publications, Abstract of the Dissertation.
    Table of contents (excerpt): Chapter 1, Introduction: 1.1 Dynamical Systems (1.1.1 Linear and Nonlinear Dynamics, 1.1.2 Chaos, 1.1.3 Synchronization); 1.2 Parameter Estimation (1.2.1 Kalman Filters, 1.2.2 Variational Methods, 1.2.3 Parameter Estimation in Nonlinear Systems); 1.3 Dissertation Preview. Chapter 2, Dynamical State and Parameter Estimation: 2.1 Introduction; 2.2 DSPE Overview; 2.3 Formulation; 2.3.1 Least Squares Minimization ...
  • How to Use Your Favorite MIP Solver: Modeling, Solving, Cannibalizing
    How to use your favorite MIP Solver: modeling, solving, cannibalizing. Andrea Lodi, University of Bologna, Italy, [email protected]. January-February 2012, Universität Wien.
    Setting:
    • We consider a general Mixed Integer Program in the form
        max {c^T x : Ax ≤ b, x ≥ 0, x_j ∈ Z for all j ∈ I}    (1)
      where the matrix A does not have a special structure.
    • Thus, the problem is solved through branch-and-bound, and the bounds are computed by iteratively solving the LP relaxations through a general-purpose LP solver.
    • The course basically covers the MIP, but we will try to discuss, when possible, how crucial the LP component (the engine) is, and how much the whole framework is built on top of the capability of effectively solving LPs.
    • Roughly speaking, using the LP computation as a tool, MIP solvers integrate the branch-and-bound and the cutting-plane algorithms through variations of the general branch-and-cut scheme [Padberg & Rinaldi 1987] developed in the context of the Traveling Salesman Problem (TSP).
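To connect this to the TOMLAB guide above: a hedged sketch of how a small instance of model (1) could be entered in the TOMLAB format with mipAssign and solved by the Base Module's mipSolve. The data are illustrative, and the argument order should be verified against the mipAssign reference:

```matlab
% Hedged sketch of a tiny MIP in the TOMLAB format (not part of Lodi's
% course material). TOMLAB minimizes, so the maximization objective c'x
% is negated.
Name = 'Small MIP';
c    = -[7; 5; 6];                 % maximize 7*x1 + 5*x2 + 6*x3
A    = [3 2 4];                    % one linear row  A*x <= 10
b_L  = -inf;  b_U = 10;
x_L  = zeros(3,1);  x_U = 10*ones(3,1);
IntVars = 1:3;                     % all variables integer (the set I)
Prob   = mipAssign(c, A, b_L, b_U, x_L, x_U, [], Name, [], [], IntVars);
Result = tomRun('mipSolve', Prob, 1);   % Base Module branch-and-bound solver
```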
  • Julia, My New Friend for Computing and Optimization? Pierre Haessig, Lilian Besson
    Julia, my new friend for computing and optimization? Pierre Haessig, Lilian Besson. Master course, France, 2018. HAL Id: cel-01830248, https://hal.archives-ouvertes.fr/cel-01830248, submitted on 4 Jul 2018. (HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not; the documents may come from teaching and research institutions in France or abroad, or from public or private research centers.)
    "Julia, my new friend for computing and optimization?" An intro to the Julia programming language, for MATLAB users. Date: 14th of June 2018. Who: Lilian Besson & Pierre Haessig (SCEE & AUT team @ IETR / CentraleSupélec campus Rennes).
    Agenda for today [30 min]:
    1. What is Julia? [5 min]
    2. Comparison with MATLAB [5 min]
    3. Two examples of problems solved in Julia [5 min]
    4. Longer example on optimization with JuMP [13 min]
    5. Links for more information? [2 min]
    1. What is Julia? Open-source and free programming language (MIT license). Developed since 2012 (creators: MIT researchers). Growing popularity worldwide, in research, data science, finance etc. Multi-platform: Windows, Mac OS X, GNU/Linux...
  • Solving Mixed Integer Linear and Nonlinear Problems Using the SCIP Optimization Suite
    Solving mixed integer linear and nonlinear problems using the SCIP Optimization Suite. Timo Berthold, Gerald Gamrath, Ambros M. Gleixner, Stefan Heinz, Thorsten Koch, Yuji Shinano. Zuse Institute Berlin (Konrad-Zuse-Zentrum für Informationstechnik Berlin), Takustraße 7, D-14195 Berlin-Dahlem, Germany, {berthold,gamrath,gleixner,heinz,koch,shinano}@zib.de. Supported by the DFG Research Center MATHEON "Mathematics for key technologies" in Berlin. ZIB-Report 12-27 (July 2012). Published by Konrad-Zuse-Zentrum für Informationstechnik Berlin, Takustraße 7, D-14195 Berlin-Dahlem; Telefon: 030-84185-0, Telefax: 030-84185-125, e-mail: [email protected], URL: http://www.zib.de. ZIB-Report (Print) ISSN 1438-0064, ZIB-Report (Internet) ISSN 2192-7782. July 31, 2012.
    Abstract: This paper introduces the SCIP Optimization Suite and discusses the capabilities of its three components: the modeling language Zimpl, the linear programming solver SoPlex, and the constraint integer programming framework SCIP. We explain how these can be used in concert to model and solve challenging mixed integer linear and nonlinear optimization problems. SCIP is currently one of the fastest non-commercial MIP and MINLP solvers. We demonstrate the usage of Zimpl, SCIP, and SoPlex by selected examples, give an overview of available interfaces, and outline plans for future development.
    A Japanese translation of this paper will be published in the Proceedings of the 24th RAMP Symposium held at Tohoku University, Miyagi, Japan, 27-28 September 2012, see http://orsj.or.
  • Algorithms for Constrained Optimization: The Benefits of General-purpose Software
    Algorithms for Constrained Optimization: The Benefits of General-purpose Software. Michael Saunders, MS&E and ICME, Stanford University, California, USA. 3rd AI+IoT Business Conference, Shenzhen, China, April 25, 2019.
    SOL: Systems Optimization Laboratory. George Dantzig, Stanford University, 1974. Inventor of the Simplex Method, father of linear programming. Large-scale optimization: algorithms, software, applications.
    SOL history: 1974 Dantzig and Cottle start SOL; 1974-78 John Tomlin, LP/MIP expert; 1974-2005 Alan Manne, nonlinear economic models; 1975-76 MS, MINOS first version; 1979-87 Philip Gill, Walter Murray, MS, Margaret Wright (Gang of 4!); 1989- Gerd Infanger, stochastic optimization; 1979- Walter Murray, MS, many students; 2002- Yinyu Ye, optimization algorithms, especially interior methods. This week! UC Berkeley opened George B. Dantzig Auditorium.
    Optimization problems: minimize an objective function subject to constraints,
        min φ(x)   subject to   ℓ ≤ (x, Ax, c(x)) ≤ u,
    where x are the variables, A is a matrix of linear-constraint coefficients, c(x) = (c_1(x), ..., c_m(x)) are nonlinear functions, and ℓ, u are lower and upper bounds.
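The standard form shown here is essentially the form used by the SOL solvers (MINOS, SNOPT, NPSOL) that TOMLAB interfaces through its /SOL add-ons. A hedged sketch of posing such a problem in the TOMLAB format follows; myObj.m and myCon.m are hypothetical m-files for φ(x) and c(x), and the conAssign argument order should be verified against the solver reference:

```matlab
% Sketch of  min phi(x)  s.t.  l <= (x, A*x, c(x)) <= u  in the TOMLAB format.
% myObj.m and myCon.m are hypothetical user m-files returning phi(x) and the
% nonlinear constraint vector c(x).
Name = 'Small NLP in SOL standard form';
x_0  = [0.5; 0.5];                       % starting point
x_L  = [0; 0];   x_U = [10; 10];         % bounds on x
A    = [1 1];    b_L = -inf;  b_U = 3;   % linear row: x1 + x2 <= 3
c_L  = 0;        c_U = inf;              % nonlinear row: c(x) >= 0
Prob = conAssign('myObj', [], [], [], x_L, x_U, Name, x_0, ...
                 [], [], A, b_L, b_U, 'myCon', [], [], [], c_L, c_U);
Result = tomRun('snopt', Prob, 1);       % /SNOPT if installed, else 'conSolve'
```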
  • PROPT Product Sheet
    PROPT - The world's fastest Optimal Control platform for MATLAB. PROPT - ONE OF A KIND, LIGHTNING FAST SOLUTIONS TO YOUR OPTIMAL CONTROL PROBLEMS! NOW WITH WELL OVER 100 TEST CASES!
    The PROPT software package is intended to solve dynamic optimization problems. Such problems are usually described by:
    • A state-space model of a system. This can be either a set of ordinary differential equations (ODE) or differential algebraic equations (DAE).
    • Initial and/or final conditions (sometimes also conditions at other points).
    • A cost functional, i.e. a scalar value that depends on the state trajectories and the control function.
    • Sometimes, additional equations and variables that, for example, relate the initial and final conditions to each other.
    The goal of PROPT is to make it possible to input such problem descriptions as simply as possible, without having to worry about the mathematics of the actual solver.
    When using PROPT, optimally coded analytical first and second order derivatives, including problem sparsity patterns, are automatically generated, thereby making it the first MATLAB package able to fully utilize NLP (and QP) solvers such as KNITRO, CONOPT, SNOPT and CPLEX. PROPT currently uses Gauss or Chebyshev-point collocation for solving optimal control problems. However, the code is written in a more general way, allowing for a DAE rather than an ODE formulation. Parameter estimation problems are also possible to solve. PROPT has three main functions:
    • Computation of the constant matrices used for the differentiation and integration of the polynomials used to approximate the solution to the trajectory optimization problem.
    • Source transformation to turn user-supplied expressions into optimized MATLAB code for the cost function f and constraint ...
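In the spirit of the published PROPT examples, a hedged sketch of a tiny optimal control problem (a double integrator driven from rest at 0 to rest at 1 with minimum control energy); the exact syntax should be checked against the PROPT manual:

```matlab
% Hedged PROPT-style sketch (assumes the PROPT toolbox is installed; the
% problem and point count are illustrative).
toms t
p = tomPhase('p', t, 0, 2, 30);           % one phase on [0,2], 30 points
setPhase(p);
tomStates x v
tomControls u
x0   = {icollocate({x == 0; v == 0}), collocate(u == 0)};    % initial guess
cbnd = {initial({x == 0; v == 0}), final({x == 1; v == 0})}; % boundary cond.
ceq  = collocate({dot(x) == v; dot(v) == u});                % dynamics
objective = integrate(u.^2);                                 % cost functional
solution  = ezsolve(objective, {cbnd, ceq}, x0);
```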
  • A Toolchain for Solving Dynamic Optimization Problems Using Symbolic and Parallel Computing
    A Toolchain for Solving Dynamic Optimization Problems Using Symbolic and Parallel Computing. Evgeny Lazutkin, Siegbert Hopfgarten, Abebe Geletu, Pu Li. Group Simulation and Optimal Processes, Institute for Automation and Systems Engineering, Technische Universität Ilmenau, P.O. Box 10 05 65, 98684 Ilmenau, Germany. {evgeny.lazutkin,siegbert.hopfgarten,abebe.geletu,pu.li}@tu-ilmenau.de
    Abstract: Significant progress in developing approaches to dynamic optimization has been made. However, its practical implementation poses a difficult task, and its real-time application, such as in nonlinear model predictive control (NMPC), remains challenging. A toolchain is developed in this work to relieve the implementation burden and, meanwhile, to speed up the computations for solving the dynamic optimization problem. To achieve these targets, symbolic computing is utilized for calculating the first and second order sensitivities on the one hand, and parallel computing is used for separately ...
    From the introduction: ... shown in Fig. 1. Based on the current process state x(k), obtained through the state observer or measurement, respectively, the optimal control problem is solved in the optimizer in each sample time. The resulting optimal control strategy in the first interval u(k) of the moving horizon is then realized through the local control system. Therefore, an essential limitation of applying NMPC is its long computation time taken to solve the NLP problem for each sample time, especially for the control of fast systems (Wang and Boyd, 2010). In general, the computation time should be much less than the sample time of the NMPC scheme (Schäfer et al., 2007). Although powerful methods are available, e.g. ...
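A toy sketch of the moving-horizon loop described in this excerpt; plantStep and solveOCP are illustrative stand-ins, not part of the authors' toolchain, and in a real NMPC scheme the solveOCP line is where an NLP is solved at every sample:

```matlab
% Toy moving-horizon (NMPC-style) loop with a scalar plant x(k+1) = x(k) + u(k).
% The "optimizer" is a stand-in that just drives the state toward zero.
plantStep = @(x, u) x + u;                 % toy plant model
solveOCP  = @(x) repmat(-0.5*x, 1, 5);     % stand-in for the horizon optimizer
x = 10;
for k = 1:20
    uTraj = solveOCP(x);     % solve the optimal control problem at sample k
    u     = uTraj(1);        % realize only the first interval u(k)
    x     = plantStep(x, u); % plant/observer provides the next state
end
fprintf('final state: %g\n', x)
```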
  • Numerical Optimization
    Numerical Optimization. Alberto Bemporad. http://cse.lab.imtlucca.it/~bemporad/teaching/numopt. Academic year 2020-2021.
    Course objectives: solve complex decision problems by using numerical optimization. Application domains:
    • Finance, management science, economics (portfolio optimization, business analytics, investment plans, resource allocation, logistics, ...)
    • Engineering (engineering design, process optimization, embedded control, ...)
    • Artificial intelligence (machine learning, data science, autonomous driving, ...)
    • Myriads of other applications (transportation, smart grids, water networks, sports scheduling, health-care, oil & gas, space, ...)
    What this course is about:
    • How to formulate a decision problem as a numerical optimization problem? (modeling)
    • Which numerical algorithm is most appropriate to solve the problem? (algorithms)
    • What's the theory behind the algorithm? (theory)
    Course contents:
    • Optimization modeling: linear models, convex models
    • Optimization theory: optimality conditions, sensitivity analysis, duality
    • Optimization algorithms: basics of numerical linear algebra, convex programming, nonlinear programming
    Other references:
    • Stephen Boyd's "Convex Optimization" courses at Stanford: http://ee364a.stanford.edu, http://ee364b.stanford.edu
    • Lieven Vandenberghe's courses at UCLA: http://www.seas.ucla.edu/~vandenbe/
    • For more tutorials/books see http://plato.asu.edu/sub/tutorials.html
    Optimization modeling. What is optimization? Optimization = assign values to a set of decision variables so as to optimize a certain objective function. Example: which is the best velocity to minimize fuel consumption? (figure: fuel consumption [ℓ/km] vs. velocity [km/h])
  • Derivative-Free Optimization: a Review of Algorithms and Comparison of Software Implementations
    J Glob Optim (2013) 56:1247-1293. DOI 10.1007/s10898-012-9951-y. Derivative-free optimization: a review of algorithms and comparison of software implementations. Luis Miguel Rios, Nikolaos V. Sahinidis. Received: 20 December 2011 / Accepted: 23 June 2012 / Published online: 12 July 2012. © Springer Science+Business Media, LLC. 2012.
    Abstract: This paper addresses the solution of bound-constrained optimization problems using algorithms that require only the availability of objective function values but no derivative information. We refer to these algorithms as derivative-free algorithms. Fueled by a growing number of applications in science and engineering, the development of derivative-free optimization algorithms has long been studied, and it has found renewed interest in recent time. Along with many derivative-free algorithms, many software implementations have also appeared. The paper presents a review of derivative-free algorithms, followed by a systematic comparison of 22 related implementations using a test set of 502 problems. The test bed includes convex and nonconvex problems, smooth as well as nonsmooth problems. The algorithms were tested under the same conditions and ranked under several criteria, including their ability to find near-global solutions for nonconvex problems, improve a given starting point, and refine a near-optimal solution. A total of 112,448 problem instances were solved. We find that the ability of all these solvers to obtain good solutions diminishes with increasing problem size. For the problems used in this study, TOMLAB/MULTIMIN, TOMLAB/GLCCLUSTER, MCS and TOMLAB/LGO are better, on average, than other derivative-free solvers in terms of solution quality within 2,500 function evaluations.
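Since several of the top-ranked codes in this comparison are TOMLAB solvers, a hedged sketch of setting up a box-bounded problem for glcCluster (or multiMin) in the TOMLAB format may be useful; myGlobalObj.m is a hypothetical objective m-file, and glcAssign's argument list should be verified against the solver reference:

```matlab
% Hedged sketch of a box-bounded global optimization setup in TOMLAB.
% myGlobalObj.m (hypothetical name) returns f(x) for a column vector x.
x_L = [-5; -5];   x_U = [5; 5];                      % box bounds only
Prob   = glcAssign('myGlobalObj', x_L, x_U, 'Box-bounded test');
Result = tomRun('glcCluster', Prob, 1);              % or 'multiMin' for multistart
x_best = Result.x_k;  f_best = Result.f_k;           % best point(s) and value
```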
  • TOMLAB – Unique Features for Optimization in MATLAB
    TOMLAB – Unique Features for Optimization in MATLAB. Bad Honnef, Germany, October 15, 2004. Kenneth Holmström, Tomlab Optimization AB, Västerås, Sweden, [email protected]; Professor in Optimization, Department of Mathematics and Physics, Mälardalen University, Sweden. http://tomlab.biz
    Outline of the talk:
    • The TOMLAB Optimization Environment: background and history, technology available
    • Optimization in TOMLAB
    • Tests on customer supplied large-scale optimization examples
    • Customer cases, embedded solutions, consultant work
    • Business perspective
    • Box-bounded global non-convex optimization
    • Summary
    Background:
    • MATLAB – a high-level language for mathematical calculations, distributed by MathWorks Inc.
    • Matlab can be extended by toolboxes that add to the features in the software, e.g. finance, statistics, control, and optimization.
    • Why develop the TOMLAB Optimization Environment?
      – A uniform approach to optimization didn't exist in MATLAB
      – Good optimization solvers were missing
      – Large-scale optimization was non-existent in MATLAB
      – Other toolboxes needed robust and fast optimization
      – Technical advantages from the MATLAB language: fast algorithm development and modeling of applied optimization problems; many built-in functions (ODE, linear algebra, ...); fast GUI development; interfaceable with C, Fortran, Java code
    History of Tomlab:
    • President and founder: Professor Kenneth Holmström
    • The company founded 1986, development started 1989
    • Two toolboxes, NLPLIB and OPERA, by 1995
    • Integrated format for optimization 1996
    • TOMLAB introduced, ISMP97 in Lausanne 1997
    • TOMLAB v1.0 distributed for free until summer 1999
    • TOMLAB v2.0 first commercial version fall 1999
    • TOMLAB starting sales from web site March 2000
    • TOMLAB v3.0 expanded with external solvers /SOL spring 2001
    • Dash Optimization Ltd's XpressMP added in TOMLAB fall 2001
    • Tomlab Optimization Inc. ...
  • Process Optimization
    Process Optimization: Mathematical Programming and Optimization of Multi-Plant Operations and Process Design. Ralph W. Pike, Director, Minerals Processing Research Institute; Horton Professor of Chemical Engineering, Louisiana State University. Department of Chemical Engineering, Lamar University, April 10, 2007.
    Process Optimization (outline):
    • Typical Industrial Problems
    • Mathematical Programming Software
    • Mathematical Basis for Optimization
    • Lagrange Multipliers and the Simplex Algorithm
    • Generalized Reduced Gradient Algorithm
    • On-Line Optimization
    • Mixed Integer Programming and the Branch and Bound Algorithm
    • Chemical Production Complex Optimization
    New results:
    • Using one computer language to write and run a program in another language
    • Cumulative probability distribution instead of an optimal point, using Monte Carlo simulation, for a multi-criteria, mixed integer nonlinear programming problem
    • Global optimization
    Design vs. Operations:
    • Optimal Design – uses flowsheet simulators and SQP; heuristics for a design, a superstructure, an optimal design
    • Optimal Operations – on-line optimization, plant optimal scheduling, corporate supply chain optimization
    Plant problem size (three example plants):
                                 Contact      Alkylation    Ethylene
      Capacity                   3,200 TPD    15,000 BPD    200 million lb/yr
      Units                      14           76            ~200
      Streams                    35           110           ~4,000
      Equality constraints       761          1,579         ~400,000
      Inequality constraints     28           50            ~10,000
      Measured variables         43           125           ~300
      Unmeasured variables       732          1,509         ~10,000
      Parameters                 11           64            ~100
    Optimization Programming Languages:
    • GAMS - General Algebraic Modeling System
    • LINDO - Widely used in business applications ...
  • Optimal Control Theory Version 0.2
    An Introduction to Mathematical Optimal Control Theory, Version 0.2. By Lawrence C. Evans, Department of Mathematics, University of California, Berkeley.
    Contents: Chapter 1: Introduction; Chapter 2: Controllability, bang-bang principle; Chapter 3: Linear time-optimal control; Chapter 4: The Pontryagin Maximum Principle; Chapter 5: Dynamic programming; Chapter 6: Game theory; Chapter 7: Introduction to stochastic control theory; Appendix: Proofs of the Pontryagin Maximum Principle; Exercises; References.
    Preface: These notes build upon a course I taught at the University of Maryland during the fall of 1983. My great thanks go to Martino Bardi, who took careful notes, saved them all these years and recently mailed them to me. Faye Yeager typed up his notes into a first draft of these lectures as they now appear. Scott Armstrong read over the notes and suggested many improvements: thanks, Scott. Stephen Moye of the American Math Society helped me a lot with AMSTeX versus LaTeX issues. My thanks also to Atilla Yilmaz for spotting lots of typos and errors, which I have corrected. I have radically modified much of the notation (to be consistent with my other writings), updated the references, added several new examples, and provided a proof of the Pontryagin Maximum Principle. As this is a course for undergraduates, I have dispensed in certain proofs with various measurability and continuity issues, and as compensation have added various critiques as to the lack of total rigor. This current version of the notes is not yet complete, but meets I think the usual high standards for material posted on the internet. Please email me at [email protected] with any corrections or comments.