Maxlik: Maximum Likelihood Estimation and Related Tools
16 pages · PDF · 1020 KB
Recommended publications
-
An Augmented Lagrangian Method for Optimization Problems in Banach Spaces (arXiv:1807.04467v1 [math.OC], 12 Jul 2018)
Christian Kanzow, Daniel Steck, Daniel Wachsmuth. October 23, 2017.

Abstract. We propose a variant of the classical augmented Lagrangian method for constrained optimization problems in Banach spaces. Our theoretical framework does not require any convexity or second-order assumptions and allows the treatment of inequality constraints with infinite-dimensional image space. Moreover, we discuss the convergence properties of our algorithm with regard to feasibility, global optimality, and KKT conditions. Some numerical results are given to illustrate the practical viability of the method.

Keywords. Constrained optimization, augmented Lagrangian method, Banach space, inequality constraints, global convergence.

1 Introduction. Let X, Y be (real) Banach spaces and let f : X → R, g : X → Y be given mappings. The aim of this paper is to describe an augmented Lagrangian method for the solution of the constrained optimization problem

    min f(x)  subject to (s.t.)  g(x) ≤ 0.    (P)

We assume that Y ↪ L^2(Ω) densely for some measure space Ω, where the natural order on L^2(Ω) induces the order on Y. A detailed description together with some remarks about this setting is given in Section 2. Augmented Lagrangian methods belong to the most famous and successful algorithms for the solution of finite-dimensional optimization problems and are described in almost all textbooks on continuous optimization, see, e.g., [8, 29]. Their generalization to infinite-dimensional problems has received considerable attention throughout the last decades [6, 7, 16, 18, 20, 21, 22, 24, 25]. However, most existing approaches either assume a very specific problem structure [6, 7], require strong convexity assumptions [18], or consider only the case where Y is finite-dimensional [20, 24].
-
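The classical augmented Lagrangian iteration that Kanzow, Steck, and Wachsmuth generalize to Banach spaces can be sketched in finite dimensions: minimize the Powell-Hestenes-Rockafellar augmented Lagrangian in x, then update the multiplier. A minimal sketch, assuming SciPy for the inner solves; the toy problem min x^2 s.t. x ≥ 1 (with solution x* = 1, λ* = 2) and the fixed penalty ρ = 10 are invented for illustration, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def f(x):  # objective
    return x[0] ** 2

def g(x):  # inequality constraint g(x) <= 0, i.e. x >= 1
    return 1.0 - x[0]

def aug_lagrangian(x, lam, rho):
    # Powell-Hestenes-Rockafellar augmented Lagrangian for g(x) <= 0
    return f(x) + (max(0.0, lam + rho * g(x)) ** 2 - lam ** 2) / (2.0 * rho)

lam, rho = 0.0, 10.0
x = np.array([0.0])
for _ in range(20):
    x = minimize(aug_lagrangian, x, args=(lam, rho)).x  # inner unconstrained solve
    lam = max(0.0, lam + rho * g(x))                    # multiplier update
print(x, lam)  # ≈ [1.], 2.0
```

Each outer iteration contracts the multiplier error by a factor depending on ρ, so a handful of inner solves suffice on this toy problem.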
Chapter 8 Constrained Optimization
8.1 Introduction. This chapter presents the optimization problems where constraints play a role. The combination of the equality constraints, inequality constraints, and lower and upper bounds defines a feasible region. If the solution also minimizes (or maximizes) the objective function, it is a local optimal solution. Other local optimal solutions may exist in the feasible region, with one or more being a global optimal solution. When the objective function, equality constraints, and inequality constraints are linear with respect to the variables, the problem is called a linear programming (LP) problem. If the objective function, any of the equality constraints, and/or any of the inequality constraints are nonlinear with respect to the variables, the problem is called a nonlinear programming (NLP) problem. For a function with two independent variables, contours of the objective function can be plotted with one variable against the other as shown in Figure 3-11. The optimization problem is to minimize the objective function

    f(x) = y = 4x_1^2 + 5x_2^2

The unconstrained solutions are shown in Figure 8.1a, where the entire region is feasible. The optimum solution is at x_1 = 0 and x_2 = 0. Figure 8.1b shows the constrained case with the inequality constraint

    g(x) = 1 − x_1 ≤ 0,  or  x_1 ≥ 1

Figure 8.1-1 Two-variable optimization: (a) unconstrained; (b) constrained. [Contour plots over −4 ≤ x_1, x_2 ≤ 4.]
-
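The two cases of Figure 8.1 can be reproduced numerically. A sketch assuming SciPy (not part of the chapter); note that SciPy's "ineq" convention requires fun(x) ≥ 0, so the constraint x_1 ≥ 1 is passed as x[0] − 1:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: 4 * x[0] ** 2 + 5 * x[1] ** 2

# (a) Unconstrained: optimum at the origin.
unc = minimize(f, x0=[2.0, 2.0])

# (b) Constrained with g(x) = 1 - x1 <= 0, i.e. x1 >= 1.
con = minimize(f, x0=[2.0, 2.0],
               constraints=[{"type": "ineq", "fun": lambda x: x[0] - 1.0}])
print(unc.x)  # ≈ [0, 0]
print(con.x)  # ≈ [1, 0], with f = 4 on the constraint boundary
```

The constrained optimum lands on the boundary x_1 = 1 because the unconstrained minimizer lies outside the feasible region.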
An Augmented Lagrangian Method for Equality Constrained Optimization with Rapid Infeasibility Detection Capabilities Paul Armand, Ngoc Nguyen Tran
Paul Armand, Ngoc Nguyen Tran. [Research Report] Université de Limoges, France; XLIM, 2018. HAL Id: hal-01785365, https://hal.archives-ouvertes.fr/hal-01785365, submitted on 4 May 2018. April 30, 2018.

Abstract. We present a primal-dual augmented Lagrangian method for solving an equality constrained minimization problem, which is able to rapidly detect infeasibility. The method is based on a modification of the algorithm proposed in [1]. A new parameter is introduced to scale the objective function and, in case of infeasibility, to force the convergence of the iterates to an infeasible stationary point. It is shown, under mild assumptions, that whenever the algorithm converges to an infeasible stationary point, the rate of convergence is quadratic. This is a new convergence result for the class of augmented Lagrangian methods.
-
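The infeasible case the report targets can be illustrated with a generic augmented Lagrangian loop (not the authors' primal-dual method): on an infeasible problem, driving the penalty parameter up pushes the iterates toward a minimizer of the constraint violation ‖h(x)‖^2. The problem and penalty schedule below are invented, and SciPy is assumed for the inner solves:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]
h = lambda x: x[0] ** 2 + 1.0  # equality constraint h(x) = 0 has no solution

def L(x, lam, rho):
    # augmented Lagrangian for the equality constraint h(x) = 0
    return f(x) + lam * h(x) + 0.5 * rho * h(x) ** 2

lam, rho = 0.0, 1.0
x = np.array([1.0])
for _ in range(8):
    x = minimize(L, x, args=(lam, rho)).x
    lam += rho * h(x)   # multiplier update (diverges here, since h never vanishes)
    rho *= 10.0         # growing penalty dominates; iterates track argmin h(x)^2
print(x)  # ≈ [0.], the minimizer of the infeasibility ||h(x)||^2
```

The point of the paper is to reach such an infeasible stationary point *quickly* (quadratically); this naive loop only illustrates where the iterates go, not the rate.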
Improving Ultimate Convergence of an Augmented Lagrangian Method
E. G. Birgin, J. M. Martínez. June 12, 2007. Updated March 19, 2008.

Abstract. Optimization methods that employ the classical Powell-Hestenes-Rockafellar Augmented Lagrangian are useful tools for solving Nonlinear Programming problems. Their reputation decreased in the last ten years due to the comparative success of Interior-Point Newtonian algorithms, which are asymptotically faster. In the present research a combination of both approaches is evaluated. The idea is to produce a competitive method that is more robust and efficient than its "pure" counterparts for critical problems. Moreover, an additional hybrid algorithm is defined, in which the Interior-Point method is replaced by the Newtonian resolution of a KKT system identified by the Augmented Lagrangian algorithm. The software used in this work is freely available through the Tango Project web page: http://www.ime.usp.br/∼egbirgin/tango/.

Key words: Nonlinear programming, Augmented Lagrangian methods, Interior-Point methods, Newton's method, experiments.

1 Introduction. We are concerned with Nonlinear Programming problems defined in the following way:

    Minimize f(x)  subject to  h(x) = 0,  g(x) ≤ 0,  x ∈ Ω,    (1)

where h : R^n → R^m, g : R^n → R^p, f : R^n → R are smooth and Ω ⊂ R^n is an n-dimensional box, namely Ω = {x ∈ R^n | ℓ ≤ x ≤ u}.

(Department of Computer Science IME-USP, University of São Paulo, Rua do Matão 1010, Cidade Universitária, 05508-090, São Paulo SP, Brazil. The first author was supported by PRONEX-Optimization (PRONEX - CNPq / FAPERJ E-26 / 171.164/2003 - APQ1), FAPESP (Grant 06/53768-0) and CNPq (PROSUL 490333/2004-4).)
-
First-Order Augmented Lagrangian Methods for State Constraint Problems
Master's thesis: First-order augmented Lagrangian methods for state constraint problems, submitted by Jonas Siegfried Jehle at the Faculty of Sciences, Department of Mathematics and Statistics, and the Department of Mathematics. German supervisor and 1st reviewer: Prof. Dr. Stefan Volkwein, University of Konstanz. Chinese supervisor and 2nd reviewer: Prof. Ph.D. Jinyan Fan, Shanghai Jiao Tong University. Konstanz, 6 September 2018. Konstanzer Online-Publikations-System (KOPS), URL: http://nbn-resolving.de/urn:nbn:de:bsz:352-2-1q60suvytbn4t1

Abstract. In this thesis, the augmented Lagrangian method is used to solve optimal boundary control problems. The considered optimal control problem appears in energy efficient building operation and consists of a heat equation with convection along with bilateral control and state constraints.
-
A Tutorial of AMPL for Linear Programming
Hongwei Jin. May, 2014.

Contents: 1 Introduction (1.1 AMPL; 1.2 Solvers; 1.3 Install and Run) · 2 AMPL Syntax (2.1 Model Declaration; 2.2 Data Assignment; 2.3 Solve; 2.4 Display; 2.5 Advanced AMPL Feature) · 3 Example: Capacity Expansion Problem (3.1 Problem Definition; 3.2 Model Definition; 3.3 Transfer into AMPL) · 4 Summarization · Reference

1 Introduction. 1.1 AMPL. AMPL is a comprehensive and powerful algebraic modeling language for linear and nonlinear optimization problems, in discrete or continuous variables. Developed at Bell Laboratories, AMPL lets you use common notation and familiar concepts to formulate optimization models and examine solutions, while the computer manages communication with an appropriate solver. AMPL's flexibility and convenience render it ideal for rapid prototyping and model development, while its speed and control options make it an especially efficient choice for repeated production runs. According to the statistics of NEOS [1], AMPL is the most commonly used mathematical modeling language submitted to the server. (Figure 1: NEOS)
-
Math 407 — Linear Optimization 1 Introduction
1 Introduction. 1.1 What is optimization? A mathematical optimization problem is one in which some function is either maximized or minimized relative to a given set of alternatives. The function to be minimized or maximized is called the objective function, and the set of alternatives is called the feasible region (or constraint region). In this course, the feasible region is always taken to be a subset of R^n (real n-dimensional space) and the objective function is a function from R^n to R. We further restrict the class of optimization problems that we consider to linear programming problems (or LPs). An LP is an optimization problem over R^n wherein the objective function is a linear function, that is, the objective has the form

    c_1 x_1 + c_2 x_2 + ··· + c_n x_n

for some c_i ∈ R, i = 1, …, n, and the feasible region is the set of solutions to a finite number of linear inequality and equality constraints, of the form

    a_i1 x_1 + a_i2 x_2 + ··· + a_in x_n ≤ b_i,  i = 1, …, s

and

    a_i1 x_1 + a_i2 x_2 + ··· + a_in x_n = b_i,  i = s + 1, …, m.

Linear programming is an extremely powerful tool for addressing a wide range of applied optimization problems. A short list of application areas is resource allocation, production scheduling, warehousing, layout, transportation scheduling, facility location, flight crew scheduling, portfolio optimization, parameter estimation, …

1.2 An Example. To illustrate some of the basic features of LP, we begin with a simple two-dimensional example. In modeling this example, we will review the four basic steps in the development of an LP model: 1.
-
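An LP in exactly this inequality form can be handed to an off-the-shelf solver. A short sketch using SciPy's linprog; the objective and constraint data below are an invented two-variable example, not from the course notes, and since linprog minimizes, the maximization objective is negated:

```python
from scipy.optimize import linprog

# maximize 3*x1 + 5*x2  subject to
#   x1         <= 4
#        2*x2  <= 12
#   3*x1 + 2*x2 <= 18,   x1, x2 >= 0
res = linprog(c=[-3, -5],                     # negate: linprog minimizes
              A_ub=[[1, 0], [0, 2], [3, 2]],  # rows are the a_ij coefficients
              b_ub=[4, 12, 18],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # → [2. 6.], objective 36
```

Checking the vertices of the feasible polygon by hand confirms (2, 6) with objective value 36 is optimal.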
Nonlinear Constrained Optimization: Methods and Software
ARGONNE NATIONAL LABORATORY, 9700 South Cass Avenue, Argonne, Illinois 60439. Nonlinear Constrained Optimization: Methods and Software. Sven Leyffer and Ashutosh Mahajan, Mathematics and Computer Science Division. Preprint ANL/MCS-P1729-0310, March 17, 2010. This work was supported by the Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy, under Contract DE-AC02-06CH11357.

Contents: 1 Background and Introduction; 2 Convergence Test and Termination Conditions; 3 Local Model: Improving a Solution Estimate (3.1 Sequential Linear and Quadratic Programming; 3.2 Interior-Point Methods); 4 Globalization Strategy: Convergence from Remote Starting Points (4.1 Augmented Lagrangian Methods; 4.2 Penalty and Merit Function Methods; 4.3 Filter and Funnel Methods; 4.4 Maratos Effect and Loss of Fast Convergence); 5 Globalization Mechanisms (5.1 Line-Search Methods; 5.2 Trust-Region Methods); 6 Nonlinear Optimization Software: Summary; 7 Interior-Point Solvers (7.1 CVXOPT; 7.2 IPOPT; 7.3 KNITRO; 7.4 LOQO); 8 Sequential Linear/Quadratic Solvers (8.1 CONOPT …)
-
Nonlinear Programming
"Since the fabric of the universe is most perfect, and is the work of a most wise Creator, nothing whatsoever takes place in the universe in which some form of maximum and minimum does not appear." —Leonhard Euler

1.1 Introduction. In this chapter, we introduce the nonlinear programming (NLP) problem. Our purpose is to provide some background on nonlinear problems; indeed, an exhaustive discussion of both theoretical and practical aspects of nonlinear programming could be the subject matter of an entire book. There are several reasons for studying nonlinear programming in an optimal control class. First and foremost, anyone interested in optimal control should know about a number of fundamental results in nonlinear programming. As optimal control problems are optimization problems in (infinite-dimensional) functional spaces, while nonlinear programs are optimization problems in Euclidean spaces, optimal control can indeed be seen as a generalization of nonlinear programming. Second, and as we shall see in Chapter 3, NLP techniques are used routinely and are particularly efficient in solving optimal control problems. In the case of a discrete control problem, i.e., when the controls are exerted at discrete points, the problem can be directly stated as an NLP problem. In a continuous control problem, on the other hand, i.e., when the controls are functions to be exerted over a prescribed planning horizon, an approximate solution can be found by solving an NLP problem. (Nonlinear and Dynamic Optimization: From Theory to Practice. By B. Chachuat. Automatic Control Laboratory, EPFL, Switzerland, 2007.)
-
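The transcription of a discrete control problem into an NLP can be made concrete in a few lines. A minimal sketch assuming SciPy's SLSQP solver; the integrator dynamics x_{k+1} = x_k + u_k, the horizon N = 5, and the endpoint target x_N = 1 are all invented for illustration, not from the chapter:

```python
import numpy as np
from scipy.optimize import minimize

# Discrete control problem: states x_{k+1} = x_k + u_k with x_0 = 0,
# hit x_N = 1 while minimizing the control energy sum(u_k^2).
# Treating the controls (u_0, ..., u_{N-1}) as NLP variables gives a
# standard finite-dimensional NLP with one equality constraint.
N = 5
energy = lambda u: np.sum(u ** 2)
endpoint = lambda u: np.sum(u) - 1.0  # x_N - 1 = 0, since x_N = sum(u_k)
res = minimize(energy, np.zeros(N),
               constraints=[{"type": "eq", "fun": endpoint}])
print(res.x)  # ≈ [0.2 0.2 0.2 0.2 0.2], the effort spread evenly
```

The even split u_k = 1/N is what minimizing a sum of squares under a fixed sum forces, so the NLP answer matches the hand calculation.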
Adaptive Augmented Lagrangian Methods: Algorithms and Practical Numerical Experience
Frank E. Curtis, Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, PA, USA; Nicholas I. M. Gould, STFC-Rutherford Appleton Laboratory, Numerical Analysis Group, R18, Chilton, OX11 0QX, UK; Hao Jiang and Daniel P. Robinson, Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD, USA. COR@L Technical Report 14T-06-R1.

In this paper, we consider augmented Lagrangian (AL) algorithms for solving large-scale nonlinear optimization problems that execute adaptive strategies for updating the penalty parameter. Our work is motivated by the recently proposed adaptive AL trust region method by Curtis, Jiang, and Robinson [Math. Prog., DOI: 10.1007/s10107-014-0784-y, 2013]. The first focal point of this paper is a new variant of the approach that employs a line search rather than a trust region strategy, where a critical algorithmic feature for the line search strategy is the use of convexified piecewise quadratic models of the AL function for computing the search directions. We prove global convergence guarantees for our line search algorithm that are on par with those for the previously proposed trust region method. A second focal point of this paper is the practical performance of the line search and trust region algorithm variants in Matlab software, as well as that of an adaptive penalty parameter updating strategy incorporated into the Lancelot software. We test these methods on problems from the CUTEst and COPS collections, as well as on challenging test problems related to optimal power flow.