Tomlab User's Guide (the Tomlab Manual)
16 pages, PDF, 1020 KB
Recommended publications
-
Optimal Steering Control Input Generation for Vehicle's Entry Speed Maximization in a Double-Lane Change Manoeuvre
Matthias Tidlund and Stavros Angelis, Vehicle Engineering, KTH Royal Institute of Technology. Master Thesis TRITA-AVE 2013:64, ISSN 1651-7660.
Acknowledgment
This thesis study was performed between June and November 2013 at Volvo Cars' Active Safety CAE department, which provided a truly inspiring environment with skilled colleagues and the opportunity to gain insight into their work. Our Volvo Cars supervisor Diomidis Katzourakis, CAE Vehicle Dynamics engineer, constantly provided invaluable feedback on both the content of the thesis and our presentations at Volvo. He has been a great source of knowledge, answers, and ideas throughout the process of deciding on and modelling the vehicle's dynamic complexity. We would also like to thank our supervisor Mikael Nybacka, Assistant Professor in Vehicle Dynamics at KTH Royal Institute of Technology, for his support and guidance in delivering a report of high quality, for the scheduling and timeline of this work, and for the opportunity to present the work in parts so as to keep a better overview of its progress and quality. Special thanks go to Mathias Lidberg, Associate Professor in Vehicle Dynamics at Chalmers University of Technology, for his active participation in the project: not only did he save us a great deal of time at the start by helping us understand the optimization tool Tomlab and the parts of an optimization problem that are most important, he also constantly provided ideas and feedback.
-
Optimal Control of Nonholonomic Mechanical Systems
OPTIMAL CONTROL OF NONHOLONOMIC MECHANICAL SYSTEMS, by Stuart Marcus Rogers. A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Applied Mathematics, Department of Mathematical and Statistical Sciences, University of Alberta. © Stuart Marcus Rogers, 2017.
Abstract
This thesis investigates the optimal control of two nonholonomic mechanical systems, Suslov's problem and the rolling ball. Suslov's problem is a nonholonomic variation of the classical rotating free rigid body problem, in which the body angular velocity Ω(t) must always be orthogonal to a prescribed, time-varying body frame vector ξ(t), i.e. ⟨Ω(t), ξ(t)⟩ = 0. The motion of the rigid body in Suslov's problem is actuated via ξ(t), while the motion of the rolling ball is actuated via internal point masses that move along rails fixed within the ball. First, by applying Lagrange-d'Alembert's principle with Euler-Poincaré's method, the uncontrolled equations of motion are derived. Then, by applying Pontryagin's minimum principle, the controlled equations of motion are derived, a solution of which obeys the uncontrolled equations of motion, satisfies prescribed initial and final conditions, and minimizes a prescribed performance index. Finally, the controlled equations of motion are solved numerically by a continuation method, starting from an initial solution obtained analytically (in the case of Suslov's problem) or via a direct method (in the case of the rolling ball).
Preface
This thesis contains material that has appeared in a pair of papers, one on Suslov's problem and the other on rolling ball robots, co-authored with my supervisor, Vakhtang Putkaradze.
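The continuation strategy described in the abstract can be sketched roughly as follows in MATLAB. This is a hypothetical illustration, not the thesis code: bvp4c stands in for the boundary-value solver of the controlled equations, and the dynamics, boundary conditions and continuation parameter mu are placeholder toys.

% Hypothetical continuation sketch: sweep a parameter mu and warm-start each
% boundary-value solve from the previous solution (placeholder dynamics only).
odefun = @(t, z, mu) [z(2); -mu*sin(z(1))];   % toy state/costate dynamics
bcfun  = @(za, zb, mu) [za(1); zb(1) - 1];    % toy boundary conditions
sol = bvpinit(linspace(0, 1, 10), [0; 0]);    % initial guess for the first step
for mu = linspace(0, 1, 21)                   % continuation parameter steps
    sol = bvp4c(@(t, z) odefun(t, z, mu), ...
                @(za, zb) bcfun(za, zb, mu), sol);   % warm start from last sol
end
plot(sol.x, sol.y(1, :))                      % solution at the final parameter value
-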
MAD Product Sheet
MAD - MATLAB Automatic Differentiation package. MATLAB Automatic Differentiation (MAD) is a professionally maintained and developed automatic differentiation tool for MATLAB. Many new features are added continuously, since development and additions are made in close cooperation with the user base.
Automatic differentiation is generally defined as follows (after Andreas Griewank, Evaluating Derivatives - Principles and Techniques of Algorithmic Differentiation, SIAM 2000): “Algorithmic, or automatic, differentiation (AD) is concerned with the accurate and efficient evaluation of derivatives for functions defined by computer programs. No truncation errors are incurred, and the resulting numerical derivative values can be used for all scientific computations that are based on linear, quadratic, or even higher order approximations to nonlinear scalar or vector functions.”
MAD is a MATLAB library of functions and utilities for the automatic differentiation of MATLAB functions and statements. MAD utilizes an optimized class library, “derivvec”, for the linear combination of derivative vectors.
Key Features
• A variety of trigonometric functions.
• FFT functions available (FFT, IFFT).
• interp1 and interp2 optimized for differentiation.
• Delivers floating point precision derivatives (better robustness).
• A new MATLAB class fmad which overloads the built-in MATLAB arithmetic and some intrinsic functions.
• More than 100 built-in MATLAB operators have been implemented.
• The classes allow for the use of MATLAB's sparse matrix representation to exploit sparsity in the derivative calculation at runtime.
• Normally faster than, or as fast as, numerical differentiation.
• Complete integration with all TOMLAB solvers.
• Possible to use as a plug-in for the modeling class tomSym and the optimal control package PROPT.
MAD Example
>> x = fmad(3,1);
>> y = x^2 + x^3;
>> getderivs(y)
ans =
    33
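Building on the sheet's own example, a slightly larger sketch is shown below; the fmad constructor and getderivs accessor are taken from the example above, while the test function and the comparison against a hand-computed derivative are illustrative additions.

% Differentiate f(x) = x^2*sin(x) at x = 2 in forward mode (illustrative sketch)
x  = fmad(2, 1);                       % value 2, seed derivative 1
y  = x^2 * sin(x);                     % overloaded operators propagate derivatives
dy = getderivs(y);                     % forward-mode derivative of y w.r.t. x
analytic = 2*2*sin(2) + 2^2*cos(2);    % hand-computed 2*x*sin(x) + x^2*cos(x)
fprintf('AD: %.12f   analytic: %.12f\n', dy, analytic)
-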
CME 338 Large-Scale Numerical Optimization Notes 2
Stanford University, ICME
CME 338 Large-Scale Numerical Optimization
Instructor: Michael Saunders, Spring 2019
Notes 2: Overview of Optimization Software
1 Optimization problems
We study optimization problems involving linear and nonlinear constraints:

    NP:   minimize over x ∈ Rⁿ   φ(x)
          subject to   ℓ ≤ ( x, Ax, c(x) ) ≤ u,

where φ(x) is a linear or nonlinear objective function, A is a sparse matrix, c(x) is a vector of nonlinear constraint functions cᵢ(x), and ℓ and u are vectors of lower and upper bounds. We assume the functions φ(x) and cᵢ(x) are smooth: they are continuous and have continuous first derivatives (gradients). Sometimes gradients are not available (or too expensive) and we use finite difference approximations. Sometimes we need second derivatives.
We study algorithms that find a local optimum for problem NP. Some examples follow. If there are many local optima, the starting point is important.
LP  Linear Programming:  min cᵀx  subject to  ℓ ≤ (x, Ax) ≤ u
    MINOS, SNOPT, SQOPT; LSSOL, QPOPT, NPSOL (dense); CPLEX, Gurobi, LOQO, HOPDM, MOSEK, XPRESS; CLP, lp_solve, SoPlex (open source solvers [7, 34, 57])
QP  Quadratic Programming:  min cᵀx + ½xᵀHx  subject to  ℓ ≤ (x, Ax) ≤ u
    MINOS, SQOPT, SNOPT, QPBLUR; LSSOL (H = BᵀB, least squares), QPOPT (H indefinite); CLP, CPLEX, Gurobi, LANCELOT, LOQO, MOSEK
BC  Bound Constraints:  min φ(x)  subject to  ℓ ≤ x ≤ u
    MINOS, SNOPT; LANCELOT, L-BFGS-B
LC  Linear Constraints:  min φ(x)  subject to  ℓ ≤ (x, Ax) ≤ u
    MINOS, SNOPT, NPSOL
NC  Nonlinear Constraints:  min φ(x)  subject to  ℓ ≤ (x, Ax, c(x)) ≤ u
    MINOS, SNOPT, NPSOL; CONOPT, LANCELOT; Filter, KNITRO, LOQO (second derivatives); IPOPT (open source solver [30])
Algorithms for finding local optima are used to construct algorithms for more complex optimization problems: stochastic, nonsmooth, global, mixed integer.
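As a small illustration of the LP problem class above, here is a minimal MATLAB sketch using the Optimization Toolbox's linprog; the problem data are made up, and any of the LP solvers listed could be substituted.

% Illustrative LP:  min c'x  subject to  A*x <= b,  x >= 0
c  = [-1; -2];                 % objective coefficients
A  = [ 1  1;
       1 -1];                  % inequality constraint matrix
b  = [ 4;  2];
lb = [ 0;  0];                 % lower bounds on x
x  = linprog(c, A, b, [], [], lb, []);
disp(x)                        % optimal vertex
-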
Tomlab Product Sheet
TOMLAB® - For fast and robust large-scale optimization in MATLAB®
The TOMLAB Optimization Environment is a powerful optimization and modeling package for solving applied optimization problems in MATLAB. TOMLAB provides a wide range of features, tools and services for your solution process:
• A uniform approach to solving optimization problems.
• A modeling class, tomSym, for lightning-fast source transformation.
• Automatic gateway routines for format mapping to different solver types.
• Over 100 different algorithms for linear, discrete, global and nonlinear optimization.
• A large number of fully integrated Fortran and C solvers.
• Full integration with the MAD toolbox for automatic differentiation.
• 6 different methods for numerical differentiation.
• Unique features, like costly global black-box optimization and semi-definite programming with bilinear inequalities.
• Demo licenses with no solver limitations.
• Call compatibility with MathWorks' Optimization Toolbox.
• Very extensive example problem sets, more than 700 models.
• Advanced support by our team of developers in Sweden and the USA.
• TOMLAB is available for all MATLAB R2007b+ users.
• Continuous solver upgrades and customized implementations.
• Embedded solvers and stand-alone applications.
• Easy installation for all supported platforms: Windows (32/64-bit) and Linux/OS X (64-bit).
For more information, see http://tomopt.com or e-mail [email protected]. Modeling Environment: http://tomsym.com. Dedicated Optimal Control Page: http://tomdyn.com.
Tomlab Optimization Inc.
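A minimal sketch of the tomSym modeling style mentioned above appears below; it assumes the toms/ezsolve interface described in the TOMLAB documentation, and the toy problem itself is made up.

% Illustrative tomSym model: a small convex problem
toms x y                                   % symbolic decision variables
objective   = (x - 1)^2 + (y - 2)^2;       % objective to minimize
constraints = {x + y <= 2, x >= 0, y >= 0};
solution    = ezsolve(objective, constraints);   % ezsolve picks a suitable solver
disp([solution.x, solution.y])
-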
PROPT - Matlab Optimal Control Software
PROPT - Matlab Optimal Control Software - ONE OF A KIND, LIGHTNING FAST SOLUTIONS TO YOUR OPTIMAL CONTROL PROBLEMS!
Per E. Rutquist (Tomlab Optimization AB, Västerås Technology Park, Trefasgatan 4, SE-721 30 Västerås, Sweden) and Marcus M. Edvall (Tomlab Optimization Inc., 1260 SE Bishop Blvd Ste E, Pullman, WA, USA). April 26, 2010.
Contents
Contents 2
1 PROPT Guide Overview 22
  1.1 Installation 22
  1.2 Foreword to the software 22
  1.3 Initial remarks 22
2 Introduction to PROPT 24
  2.1 Overview of PROPT syntax 24
  2.2 Vector representation 25
  2.3 Global optimality 25
3 Modeling optimal control problems 27
  3.1 A simple example 27
  3.2 Code generation 28
  3.3 Modeling 29
    3.3.1 Modeling notes 31
  3.4 Independent variables, scalars and constants 33
  3.5 State and control variables 33
  3.6 Boundary, path, event and integral constraints 34
4 Multi-phase optimal control 34
5 Scaling of optimal control problems 34
6 Setting solver and options 35
7 Solving optimal control problems 36
  7.1 Standard functions
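As a taste of the syntax covered in the guide, a minimal PROPT-style sketch of a toy energy-minimization problem follows; the problem itself is made up, and the tomPhase/collocate/ezsolve calls follow the pattern used in the published PROPT examples rather than being quoted from this manual.

% Illustrative optimal control sketch: drive x from 0 to 1 on [0,1], minimize integral of u^2
toms t
p = tomPhase('p', t, 0, 1, 30);             % one phase on [0,1], 30 collocation points
setPhase(p);
tomStates x
tomControls u
cbnd = {initial(x == 0), final(x == 1)};    % boundary conditions
ceq  = collocate(dot(x) == u);              % dynamics x' = u at the collocation points
objective = integrate(u.^2);                % control energy
solution  = ezsolve(objective, {cbnd, ceq});
plot(subs(collocate(t), solution), subs(collocate(x), solution))
-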
User's Guide for Tomlab 7
USER'S GUIDE FOR TOMLAB 7
Kenneth Holmström, Anders O. Göran and Marcus M. Edvall. May 5, 2010.
TOMLAB NOW INCLUDES THE MODELING ENGINE, TomSym [See section 4.3]!
More information is available at the TOMLAB home page http://tomopt.com/ and at the Applied Optimization and Modeling (TOM) home page http://www.ima.mdh.se/tom. E-mail: [email protected].
Kenneth Holmström: Professor in Optimization, Mälardalen University, Department of Mathematics and Physics, P.O. Box 883, SE-721 23 Västerås, Sweden, [email protected].
Anders O. Göran: Tomlab Optimization AB, Västerås Technology Park, Trefasgatan 4, SE-721 30 Västerås, Sweden, [email protected].
Marcus M. Edvall: Tomlab Optimization Inc., 1260 SE Bishop Blvd Ste E, Pullman, WA 99163, USA, [email protected].
Contents
Contents 2
1 Introduction 8
  1.1 What is TOMLAB? 8
  1.2 The Organization of This Guide 9
  1.3 Further Reading 10
2 Overall Design 11
  2.1 Structure Input and Output 11
  2.2 Introduction to Solver and Problem Types 11
  2.3 The Process of Solving Optimization Problems 13
  2.4 Low Level Routines and Gateway Routines 15
3 Problem Types and Solver Routines 18
  3.1 Problem Types Defined in TOMLAB 18
  3.2 Solver Routines in TOMLAB 23
    3.2.1 TOMLAB Base Module 23
    3.2.2 TOMLAB /BARNLP 24
    3.2.3 TOMLAB /CGO 24
    3.2.4 TOMLAB /CONOPT 25
    3.2.5 TOMLAB /CPLEX 25
    3.2.6 TOMLAB /KNITRO 25
    3.2.7 TOMLAB /LGO 25
    3.2.8 TOMLAB /MINLP 25
    3.2.9 TOMLAB /MINOS
-
Overview of Optimization Software
Stanford University, Management Science & Engineering (and ICME)
MS&E 318 (CME 338) Large-Scale Numerical Optimization
Instructor: Michael Saunders, Spring 2018
Notes 2: Overview of Optimization Software
1 Optimization problems
We study optimization problems involving linear and nonlinear constraints:

    NP:   minimize over x ∈ Rⁿ   φ(x)
          subject to   ℓ ≤ ( x, Ax, c(x) ) ≤ u,

where φ(x) is a linear or nonlinear objective function, A is a sparse matrix, c(x) is a vector of nonlinear constraint functions cᵢ(x), and ℓ and u are vectors of lower and upper bounds. We assume the functions φ(x) and cᵢ(x) are smooth: they are continuous and have continuous first derivatives (gradients). Sometimes gradients are not available (or too expensive) and we use finite difference approximations. Sometimes we need second derivatives.
We study algorithms that find a local optimum for problem NP. Some examples follow. If there are many local optima, the starting point is important.
LP  Linear Programming:  min cᵀx  subject to  ℓ ≤ (x, Ax) ≤ u
    MINOS, SNOPT, SQOPT; LSSOL, QPOPT, NPSOL (dense); CPLEX, Gurobi, LOQO, HOPDM, MOSEK, XPRESS; CLP, lp_solve, SoPlex (open source solvers [7, 34, 54])
QP  Quadratic Programming:  min cᵀx + ½xᵀHx  subject to  ℓ ≤ (x, Ax) ≤ u
    MINOS, SQOPT, SNOPT, QPBLUR; LSSOL (H = BᵀB, least squares), QPOPT (H indefinite); CLP, CPLEX, Gurobi, LANCELOT, LOQO, MOSEK
BC  Bound Constraints:  min φ(x)  subject to  ℓ ≤ x ≤ u
    MINOS, SNOPT; LANCELOT, L-BFGS-B
LC  Linear Constraints:  min φ(x)  subject to  ℓ ≤ (x, Ax) ≤ u
    MINOS, SNOPT, NPSOL
NC  Nonlinear Constraints:  min φ(x)  subject to  ℓ ≤ (x, Ax, c(x)) ≤ u
    MINOS, SNOPT, NPSOL; CONOPT, LANCELOT; Filter, KNITRO, LOQO (second derivatives); IPOPT (open source solver [30])
Algorithms for finding local optima are used to construct algorithms for more complex optimization problems: stochastic, nonsmooth, global, mixed integer.
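As a small illustration of the QP problem class above, here is a minimal MATLAB sketch using the Optimization Toolbox's quadprog; the data are made up and any QP solver in the list could be used instead.

% Illustrative QP:  min c'x + 0.5*x'*H*x  subject to  A*x <= b
H = [2 0;
     0 2];                     % positive definite Hessian
c = [-2; -5];
A = [ 1  2;
      1 -1];
b = [ 3;  1];
x = quadprog(H, c, A, b);
disp(x)
-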
PROPT a Linear Problem with Bang Bang Control
PROPT
Contents
Articles
PROPT 1
PROPT Optimal Control Examples 17
PROPT Acrobot 20
PROPT A Linear Problem with Bang Bang Control 22
PROPT Batch Fermentor 24
PROPT Batch Production 29
PROPT Batch Reactor Problem 34
PROPT The Brachistochrone Problem 36
PROPT The Brachistochrone Problem (DAE formulation) 38
PROPT Bridge Crane System 40
PROPT Bryson-Denham Problem 43
PROPT Bryson-Denham Problem (Detailed) 45
PROPT Bryson-Denham Problem (Short version) 47
PROPT Bryson-Denham Two Phase Problem 48
PROPT Bryson Maxrange 52
PROPT Catalyst Mixing 54
PROPT Catalytic Cracking of Gas Oil 57
PROPT Flow in a Channel 59
PROPT Coloumb Friction 1 61
PROPT Coloumb Friction 2 63
PROPT Continuous State Constraint Problem 66
PROPT Curve Area Maximization 68
PROPT Denbigh's System of Reactions 70
PROPT Dielectrophoresis Particle Control 74
PROPT Disturbance Control 76
PROPT Drug Displacement Problem 78
PROPT Optimal Drug Scheduling for Cancer Chemotherapy 81
PROPT Euler Buckling Problem 85
PROPT MK2 5-Link robot 87
PROPT Flight Path Tracking 89
PROPT Food Sterilization 92
PROPT Free Floating Robot 95
PROPT Fuller Phenomenon 101
PROPT Genetic 1 103
PROPT Genetic 2 105
PROPT Global Dynamic System 107
PROPT Goddard Rocket, Maximum Ascent 108
PROPT Goddard Rocket, Maximum Ascent, Final time free, Singular solution 112
PROPT Goddard Rocket, Maximum Ascent, Final time fixed, Singular solution 115
PROPT Greenhouse
-
TOMLAB Quickguide
TOMLAB QUICK START GUIDE
Marcus M. Edvall (Tomlab Optimization Inc., 855 Beech St #121, San Diego, CA, USA, [email protected]) and Anders Göran (Tomlab Optimization AB, Västerås Technology Park, Trefasgatan 4, SE-721 30 Västerås, Sweden, [email protected]). February 28, 2009.
Contents
Contents 2
1 QuickGuide Overview 4
2 LP Problem 5
3 MILP Problem 7
4 QP Problem 9
5 MIQP Problem 11
6 MIQQ Problem 12
7 NLP Problem 14
8 LPCON Problem 16
9 QPCON Problem 18
10 MINLP Problem 20
11 LLS Problem 22
12 MILLS Problem 24
13 NLLS Problem 26
14 GLB Problem 28
15 GLC Problem 30
16 SDP Problem 32
17 BMI Problem 34
18 MINIMAX Problem 36
19 MINIMAXLIN Problem 38
20 L1 Problem 40
21 L1LIN Problem 42
22 LINRAT Problem 43
23 GOAL Problem 45
24 SIM Problem 47
25 GP Problem 49
26 LCP Problem 51
27 QCP Problem 54
28 MCP Problem 57
29 EXP Problem 61
30 QPBLOCK Problem 63
31 Binary Selection Problems 66
32 PIECE-WISE LINEAR Problem 68
33 MAD Problem 72
34 PROPT - Optimal Control 74
  34.1 Description 74
  34.2 Modeling 74
  34.3 Example 75
  34.4 PROPT User's Guide 76
35 Important Information 77
  35.1 Passing addition variables 77
  35.2 Using Patterns 77
  35.3 Solver Timings 78
  35.4 Recursive Calls 78
  35.5 Verifying Problems 79
  35.6 Optimization Toolbox 79
  35.7 Matlab functions 79
1 QuickGuide Overview
This guide will introduce you to the very basics associated with solving problems using the TOMLAB Optimization Environment.
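To give a flavour of the quick-start style, a minimal sketch of how an LP might be set up and solved through TOMLAB's assign/run interface follows; the lpAssign/tomRun calls are the ones described in the guide, while the data, problem name and solver choice are illustrative assumptions.

% Illustrative TOMLAB LP:  min c'x  subject to  A*x <= b_U,  x >= 0
c    = [-7; -5];                      % objective coefficients
A    = [ 1  2;
         4  1];                       % linear constraint matrix
b_L  = [-inf; -inf];                  % lower bounds on A*x
b_U  = [ 6; 12];                      % upper bounds on A*x
x_L  = [ 0;  0];                      % variable bounds
x_U  = [inf; inf];
Prob   = lpAssign(c, A, b_L, b_U, x_L, x_U, [], 'lpQuickStart');
Result = tomRun('minos', Prob, 1);    % solver name and print level are assumptions
Result.x_k                            % optimal point
-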
Constraint Control of Nonholonomic Mechanical Systems
Constraint Control of Nonholonomic Mechanical Systems
Vakhtang Putkaradze and Stuart Rogers, Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, AB T6G 2G1. E-mail: [email protected] (VP), [email protected] (SR). November 13, 2018. arXiv:1610.02595v17 [math.OC].
Abstract
We derive an optimal control formulation for a nonholonomic mechanical system using the nonholonomic constraint itself as the control. We focus on Suslov's problem, which is defined as the motion of a rigid body with a vanishing projection of the body frame angular velocity on a given direction ξ. We derive the optimal control formulation, first for an arbitrary group, and then in the classical realization of Suslov's problem for the rotation group SO(3). We show that it is possible to control the system using the constraint ξ(t) and demonstrate numerical examples in which the system tracks quite complex trajectories such as a spiral.
Contents
1 Introduction 2
2 Background: Symmetry Reduction, Nonholonomic Constraints, and Optimal Control in Classical Mechanics 4
  2.1 Symmetry Reduction and the Euler-Poincaré Equation 4
  2.2 Nonholonomic Constraints and Lagrange-d'Alembert's Principle 5
  2.3 Optimal Control and Pontryagin's Minimum Principle 6
3 Suslov's Optimal Control Problem for an Arbitrary Group 8
  3.1 Derivation of Suslov's Pure Equations of Motion 8
  3.2 Derivation of Suslov's Optimally Controlled Equations of Motion 9
4 Suslov's Optimal Control Problem for Rigid Body Motion 12
  4.1 Derivation of Suslov's Pure Equations of Motion 12
  4.2 Controllability and Accessibility of Suslov's Pure Equations of Motion
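For orientation, the uncontrolled (pure) Suslov system that the paper starts from can be written in the standard Lagrange-d'Alembert form on SO(3). The sketch below states the textbook form of these equations (with 𝕀 the inertia tensor, Ω the body angular velocity and λ a Lagrange multiplier enforcing the constraint) rather than quoting the paper.

\begin{aligned}
\mathbb{I}\dot{\Omega} &= \mathbb{I}\Omega \times \Omega + \lambda\,\xi(t), \\
\langle \Omega(t), \xi(t) \rangle &= 0,
\end{aligned}

where λ is determined at each instant by requiring that the constraint remain satisfied along the motion.
-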
MS&E 318 (CME 338) Large-Scale Numerical Optimization: Course Description and Syllabus
Stanford University, Management Science & Engineering (and ICME)
MS&E 318 (CME 338) Large-Scale Numerical Optimization
Instructor: Michael Saunders, Spring 2015
Notes 1: Overview
Course description
The main algorithms and software for constrained optimization, emphasizing the sparse-matrix methods needed for their implementation. Iterative methods for linear equations and least squares. The simplex method. Basis factorization and updates. Interior methods for convex optimization. The reduced-gradient method, augmented Lagrangian methods, and SQP methods.
Recommended: MS&E 310, 311, 312, 314, or 315; CME 108, 200, 302, 304, 334, or 335.
3 units, grading basis ABCD/NP, 4 or 5 homeworks (60%), 1 project (40%), no mid-term or final.
Prerequisites: basic numerical linear algebra, including LU, QR, and SVD factorizations, and an interest in Matlab, sparse-matrix methods, and gradient-based algorithms for constrained optimization.
http://stanford.edu/class/msande318/
Syllabus
1. Overview (problem types, NEOS, Matlab, TOMLAB)
2. Iterative methods for symmetric Ax = b (symmetric Lanczos process, CG, SYMMLQ, MINRES, MINRES-QLP)
3. Iterative methods for unsymmetric Ax = b and least squares (Golub-Kahan process, CGLS, LSQR, LSMR, Craig, Arnoldi process, GMRES)
4. The primal simplex method (phase 1 in practice, basis factorization, updating, crash, scaling, degeneracy)
5. Basis updates (Product-Form, Bartels-Golub, Forrest-Tomlin, Block-LU)
6. LUSOL: A Basis Factorization Package (the engine for MINOS, SQOPT, SNOPT, MILES, PATH, lp_solve)
7. Primal-dual interior methods for LP (CPLEX, HOPDM, IPOPT, KNITRO, LOQO, MOSEK) and convex nonlinear objectives (PDCO), Basis Pursuit, BP Denoising (Lasso, LARS, Homotopy, BPdual)
8. The reduced-gradient method (MINOS part 1)
9.
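As a small illustration of syllabus items 2 and 3 above, here is a minimal MATLAB sketch using the built-in pcg and lsqr iterative solvers on made-up sparse data.

% Illustrative only: CG on a sparse SPD system, LSQR on a sparse least-squares problem
A = gallery('poisson', 10);          % 100 x 100 sparse SPD matrix (2-D Laplacian)
b = ones(size(A, 1), 1);
x = pcg(A, b, 1e-8, 200);            % conjugate gradients for A*x = b

C = sprandn(200, 50, 0.1);           % sparse rectangular matrix
d = randn(200, 1);
y = lsqr(C, d, 1e-8, 200);           % LSQR for min ||C*y - d||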