Parameter Optimization: Constrained


ECE5570: Optimization Methods for Systems and Control
Lecture notes prepared by M. Scott Trimboli. Copyright © 2013, M. Scott Trimboli.

• Many of the concepts which arise in unconstrained parameter optimization are also important in the study of constrained optimization, so we will build on the material presented in Chapter 3. Unlike unconstrained optimization, however, it is more difficult to generate consistent numerical results; hence the choice of a suitable algorithm is often more challenging for the class of constrained problems.

• The general form of most constrained optimization problems can be expressed as:
\[
\min_{u \in \mathbb{R}^n} L(u)
\quad \text{subject to} \quad
c_i(u) = 0,\; i \in E, \qquad
c_i(u) \ge 0,\; i \in I
\]
where $E$ and $I$ denote the sets of equality and inequality constraints, respectively.

• Any point $u_0$ satisfying all the constraints is said to be a feasible point; the set of all such points defines the feasible region, $R$.

• Let's consider an example for the $n = 2$ case where $L(u) = u_1 + u_2$, subject to constraints:
\[
c_1(u) = u_2 - u_1^2, \qquad
c_2(u) = 1 - u_1^2 - u_2^2
\]
This situation is depicted in Fig. 4.1 below.

Figure 4.1: Constrained Optimization Example

Equality Constraints: Two-Parameter Problem

• Consider the following two-parameter problem:
  – Find the parameter vector $u = [\,u_1 \;\; u_2\,]^T$ to minimize $L(u)$ subject to the constraint $c(u) = \alpha$
  – Question: why must there be at least two parameters in this problem?

• From the previous chapter, we know that the change in $L$ caused by changes in the parameters $u_1$ and $u_2$ in a neighborhood of the optimal solution is given approximately by:
\[
\Delta L \approx \frac{\partial L}{\partial u_1}\bigg|_* \Delta u_1
+ \frac{\partial L}{\partial u_2}\bigg|_* \Delta u_2
+ \frac{1}{2}\,\Delta u^T \frac{\partial^2 L}{\partial u^2}\bigg|_* \Delta u
\]
  – but as we change $(u_1, u_2)$ we must ensure that the constraint remains satisfied
  – to first order, this requirement means that
\[
\Delta c = \frac{\partial c}{\partial u_1}\bigg|_* \Delta u_1
+ \frac{\partial c}{\partial u_2}\bigg|_* \Delta u_2 = 0
\]
  – so $\Delta u_1$ (or $\Delta u_2$) is not arbitrary; it depends on $\Delta u_2$ (or $\Delta u_1$) according to the following relationship:
\[
\Delta u_1 = -\frac{\partial c/\partial u_2}{\partial c/\partial u_1}\bigg|_* \Delta u_2
\]
\[
\Rightarrow\;
\Delta L \approx \left[\frac{\partial L}{\partial u_2} - \frac{\partial L}{\partial u_1}\,\frac{\partial c/\partial u_2}{\partial c/\partial u_1}\right]_* \Delta u_2
+ \frac{1}{2}\left[\frac{\partial^2 L}{\partial u_2^2}
- 2\,\frac{\partial^2 L}{\partial u_2\,\partial u_1}\,\frac{\partial c/\partial u_2}{\partial c/\partial u_1}
+ \frac{\partial^2 L}{\partial u_1^2}\left(\frac{\partial c/\partial u_2}{\partial c/\partial u_1}\right)^2\right]_* \Delta u_2^2
\]
  – but $\Delta u_2$ is arbitrary; so how do we find the solution?
    ◦ the coefficient of the first-order term must be zero
    ◦ the coefficient of the second-order term must be greater than or equal to zero

• So, we can solve this problem by solving
\[
\left[\frac{\partial L}{\partial u_2} - \frac{\partial L}{\partial u_1}\,\frac{\partial c/\partial u_2}{\partial c/\partial u_1}\right]_* = 0
\]
  – but this is only one equation in two unknowns; where do we get the rest of the information required to solve this problem?
    ◦ from the constraint: $c(u_1, u_2) = \alpha$

• Although the approach outlined above is straightforward for the 2-parameter, 1-constraint problem, it becomes very difficult to implement as the dimensions of the problem increase.

• For this reason, it would be nice to develop an alternative approach that can be extended to more difficult problems.

• Is this possible? ⇒ YES!
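Before developing that alternative, it may help to see what a numerical solver returns for the $n = 2$ example of Fig. 4.1. The sketch below is an addition to the notes, not part of the original: it uses SciPy's SLSQP method and treats both $c_1$ and $c_2$ as inequality constraints $c_i(u) \ge 0$, which is an assumption since the notes do not state which constraint set each belongs to. SciPy's "ineq" constraint type uses the same $\ge 0$ sign convention.

```python
# Minimal sketch (assumes SciPy is available): minimize L(u) = u1 + u2
# subject to c1(u) = u2 - u1^2 >= 0 and c2(u) = 1 - u1^2 - u2^2 >= 0.
import numpy as np
from scipy.optimize import minimize

L  = lambda u: u[0] + u[1]                     # objective
c1 = lambda u: u[1] - u[0]**2                  # parabola constraint
c2 = lambda u: 1.0 - u[0]**2 - u[1]**2         # unit-circle constraint

res = minimize(L, x0=np.array([0.5, 0.5]), method="SLSQP",
               constraints=[{"type": "ineq", "fun": c1},
                            {"type": "ineq", "fun": c2}])
print(res.x)   # with this feasible region, expect approximately [-0.5, 0.25]
```

Under these assumptions the solver should land near $u^* \approx (-0.5,\ 0.25)$, where the parabola constraint is active and the circle constraint is inactive.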
Lagrange Multipliers: Two-Parameter Problem

• Consider the following cost function:
\[
\bar{L} = L(u_1, u_2) + \lambda\,\{c(u_1, u_2) - \alpha\}
\]
where $\lambda$ is a constant that I am free to select
  – since $c - \alpha = 0$, this cost function will be minimized at precisely the same points as $L(u_1, u_2)$
  – so, a necessary condition for a local minimum of $\bar{L}(u_1, u_2)$ is:
\[
\Delta \bar{L} = \Delta L + \lambda\,\Delta c = 0
\]
\[
\left(\frac{\partial L}{\partial u_1} + \lambda \frac{\partial c}{\partial u_1}\right)\Delta u_1
+ \left(\frac{\partial L}{\partial u_2} + \lambda \frac{\partial c}{\partial u_2}\right)\Delta u_2 = 0
\]
    ◦ but because of the constraint, $\Delta u_1$ and $\Delta u_2$ are not independent; so it is conceivable that this result could be true even if the two coefficients are not zero.
    ◦ however, $\lambda$ is free to be chosen:
\[
\text{Let } \lambda = -\frac{\partial L/\partial u_1}{\partial c/\partial u_1},
\qquad \text{then } \frac{\partial L}{\partial u_2} + \lambda \frac{\partial c}{\partial u_2} = 0,
\qquad \text{and } c(u_1, u_2) = \alpha
\]
    ◦ result ⇒ 3 equations, 3 unknowns

• Comments:
  – Is this a new result? No
    ◦ if you substitute the expression for $\lambda$ into the second equation, you get the same result as developed previously
  – So why do it?
\[
\frac{\partial L}{\partial u_1} + \lambda \frac{\partial c}{\partial u_1} = 0, \qquad
\frac{\partial L}{\partial u_2} + \lambda \frac{\partial c}{\partial u_2} = 0
\]
    ◦ these two equations are precisely the ones I would have obtained if I had assumed $\Delta u_1$ and $\Delta u_2$ were independent
    ◦ so the use of Lagrange multipliers allows me to develop the necessary conditions for a minimum using standard unconstrained parameter optimization techniques, which is a significant simplification for complicated problems

Example 4.1

• Determine the rectangle of maximum area that can be inscribed inside a circle of radius $R$.

Figure 4.2: Example - Constrained Optimization

• Generate the objective function, $L(x, y)$:
\[
A = 4xy \;\Rightarrow\; L = -4xy
\]
• Formulate the constraint equation, $c(x, y)$:
\[
c(x, y) = x^2 + y^2 = R^2
\]
• Calculate the solution:
\[
\frac{\partial L}{\partial x} = -4y, \quad \frac{\partial c}{\partial x} = 2x, \qquad
\frac{\partial L}{\partial y} = -4x, \quad \frac{\partial c}{\partial y} = 2y
\]
\[
x^2 + y^2 = R^2
\]
\[
\Rightarrow\; -4x + 4y\,\frac{2y}{2x} = 0
\;\Rightarrow\;
x^2 + y^2 = R^2, \quad x^2 - y^2 = 0
\]
\[
\Rightarrow\; x = y = \frac{R}{\sqrt{2}} \;\Rightarrow\; A^* = 2R^2
\]
• Lagrange multiplier approach:
\[
x^2 + y^2 = R^2, \qquad
-4y + 2\lambda x = 0, \qquad
-4x + 2\lambda y = 0
\]
\[
\Rightarrow\; \lambda = \frac{2y}{x}
\;\Rightarrow\; -4x + \frac{4y^2}{x} = 0
\;\Rightarrow\; x^2 - y^2 = 0
\]
\[
\Rightarrow\; x = y = \frac{R}{\sqrt{2}} \;\Rightarrow\; A^* = 2R^2, \qquad \lambda = 2
\]
• Is the fact that $\lambda = 2$ important?
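Example 4.1 is easy to check symbolically. The short sketch below is an addition to the notes (it assumes SymPy is available) and simply solves the three necessary conditions for $x$, $y$, and the multiplier $\lambda$.

```python
# Minimal symbolic sketch (assumes SymPy): solve the necessary conditions
# dL/dx + lam*dc/dx = 0, dL/dy + lam*dc/dy = 0, c(x, y) = 0 for x, y, lam.
import sympy as sp

x, y, lam, R = sp.symbols("x y lam R", positive=True)
L = -4 * x * y                     # minimize the negative of the area A = 4xy
c = x**2 + y**2 - R**2             # constraint written in c(x, y) = 0 form

eqs = [sp.diff(L, x) + lam * sp.diff(c, x),
       sp.diff(L, y) + lam * sp.diff(c, y),
       c]
print(sp.solve(eqs, [x, y, lam], dict=True))
# expect x = y = R/sqrt(2) and lam = 2, i.e. A* = 4xy = 2R^2
```

One way to read the closing question: the multiplier measures the sensitivity of the optimal cost to the constraint level, so here $\lambda = 2 = dA^*/d(R^2)$, which is consistent with $A^* = 2R^2$.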
Multi-Parameter Problem

• The same approach used for the two-parameter problem can also be applied to the multi-parameter problem.

• But now, for convenience, we'll adopt vector notation:
  – Cost function: $L(x, u)$
  – Constraints: $c(x, u) = \alpha$, where $c$ is of dimension $n \times 1$
  – by convention, $x$ will denote the set of dependent variables and $u$ will denote the set of independent variables
  – how many dependent variables are there? $n$ (why?)
\[
\Rightarrow\; x \equiv (n \times 1), \qquad u \equiv (m \times 1)
\]

• Method of First Variations:
\[
\Delta L \approx \frac{\partial L}{\partial x}\,\Delta x + \frac{\partial L}{\partial u}\,\Delta u, \qquad
\Delta c \approx \frac{\partial c}{\partial x}\,\Delta x + \frac{\partial c}{\partial u}\,\Delta u
\]
\[
\Rightarrow\; \Delta x = -\left(\frac{\partial c}{\partial x}\right)^{-1}\frac{\partial c}{\partial u}\,\Delta u
\]
  – NOTE: $x$ must be selected so that $\partial c/\partial x$ is nonsingular; for well-posed systems, this can always be done.
  – using this expression for $\Delta x$, the necessary conditions for a minimum are:
\[
\frac{\partial L}{\partial u} - \frac{\partial L}{\partial x}\left(\frac{\partial c}{\partial x}\right)^{-1}\frac{\partial c}{\partial u} = 0, \qquad
c(x, u) = \alpha
\]
giving $(m + n)$ equations in $(m + n)$ unknowns.

• Lagrange Multipliers:
\[
\bar{L} = L(x, u) + \lambda^T\{c(x, u) - \alpha\}
\]
where $\lambda$ is now a vector of parameters that I am free to pick.
\[
\Delta \bar{L} = \left(\frac{\partial L}{\partial x} + \lambda^T\frac{\partial c}{\partial x}\right)\Delta x
+ \left(\frac{\partial L}{\partial u} + \lambda^T\frac{\partial c}{\partial u}\right)\Delta u
+ \{c(x, u) - \alpha\}^T\Delta\lambda
\]
  – and because I've introduced these Lagrange multipliers, I can treat all of the parameters as if they were independent.
  – so the necessary conditions for a minimum are:
\[
\frac{\partial \bar{L}}{\partial x} = \frac{\partial L}{\partial x} + \lambda^T\frac{\partial c}{\partial x} = 0
\]
\[
\frac{\partial \bar{L}}{\partial u} = \frac{\partial L}{\partial u} + \lambda^T\frac{\partial c}{\partial u} = 0
\]
\[
\left(\frac{\partial \bar{L}}{\partial \lambda}\right)^T = c(x, u) - \alpha = 0
\]
which gives $(2n + m)$ equations in $(2n + m)$ unknowns.

• NOTE: solving the first equation for $\lambda$ and substituting this result into the second equation yields the same result as that derived using the Method of First Variations.

• Using Lagrange multipliers, we transform the problem from one of minimizing $L$ subject to $c = \alpha$ to one of minimizing $\bar{L}$ without constraints.

Sufficient Conditions for a Local Minimum

• Using Lagrange multipliers, we can develop sufficient conditions for a minimum by expanding $\Delta\bar{L}$ to second order:
\[
\Delta \bar{L} = \bar{L}_x\,\Delta x + \bar{L}_u\,\Delta u
+ \frac{1}{2}
\begin{bmatrix} \Delta x^T & \Delta u^T \end{bmatrix}
\begin{bmatrix} \bar{L}_{xx} & \bar{L}_{xu} \\ \bar{L}_{ux} & \bar{L}_{uu} \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta u \end{bmatrix}
\]
where
\[
\bar{L}_{xx} = \frac{\partial^2 \bar{L}}{\partial x^2}, \qquad
\bar{L}_{uu} = \frac{\partial^2 \bar{L}}{\partial u^2}, \qquad
\bar{L}_{xu} = \frac{\partial}{\partial u}\left(\frac{\partial \bar{L}}{\partial x}\right)^T = \bar{L}_{ux}^T
\]
  – for a stationary point, $\bar{L}_x = \bar{L}_u = 0$ and
\[
\Delta x = -c_x^{-1} c_u\,\Delta u + \text{h.o.t.}
\]
  – substituting this expression into $\Delta\bar{L}$ and neglecting terms higher than second order yields:
\[
\Delta \bar{L} = \frac{1}{2}\,\Delta u^T
\begin{bmatrix} -c_u^T c_x^{-T} & I_{m \times m} \end{bmatrix}
\begin{bmatrix} \bar{L}_{xx} & \bar{L}_{xu} \\ \bar{L}_{ux} & \bar{L}_{uu} \end{bmatrix}
\begin{bmatrix} -c_x^{-1} c_u \\ I_{m \times m} \end{bmatrix}
\Delta u
= \frac{1}{2}\,\Delta u^T\,\bar{L}_{uu}^*\,\Delta u
\]
where
\[
\bar{L}_{uu}^* = \bar{L}_{uu} - \bar{L}_{ux}\,c_x^{-1} c_u - c_u^T c_x^{-T}\,\bar{L}_{xu} + c_u^T c_x^{-T}\,\bar{L}_{xx}\,c_x^{-1} c_u
\]
  – so, a sufficient condition for a local minimum is that $\bar{L}_{uu}^*$ is positive definite.

Example 4.2

• Returning to the previous problem of a rectangle within a circle, we have
\[
\bar{L}_{xx} = 2\lambda, \quad \bar{L}_{yx} = -4, \quad c_x = 2x, \qquad
\bar{L}_{xy} = -4, \quad \bar{L}_{yy} = 2\lambda, \quad c_y = 2y
\]
• Let $y$ be the independent variable:
\[
\bar{L}_{yy}^* = 2\lambda - \left(-\frac{4y}{x}\right) - \left(-\frac{4y}{x}\right) + \frac{2\lambda y^2}{x^2}
\]
• At the minimum (?), $x = y = \dfrac{R}{\sqrt{2}}$, $\lambda = 2$
\[
\Rightarrow\; \bar{L}_{yy}^* = 16 > 0
\]
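To connect Example 4.2 back to the general expression for $\bar{L}_{uu}^*$, here is a short symbolic sketch (again an addition to the notes, assuming SymPy) that forms the reduced Hessian for the rectangle problem and evaluates it at the candidate point.

```python
# Minimal sketch (assumes SymPy): build Lbar*_yy for Example 4.2 with y independent
# and check that it is positive at x = y = R/sqrt(2), lam = 2.
import sympy as sp

x, y, lam, R = sp.symbols("x y lam R", positive=True)
Lbar = -4*x*y + lam*(x**2 + y**2 - R**2)     # augmented cost Lbar = L + lam*(c - R^2)

Lxx = sp.diff(Lbar, x, 2)                    # 2*lam
Lxy = sp.diff(Lbar, x, y)                    # -4 (equals Lyx here)
Lyy = sp.diff(Lbar, y, 2)                    # 2*lam
cx, cy = 2*x, 2*y                            # constraint gradients

# Lbar*_yy = Lyy - Lyx cx^{-1} cy - cy cx^{-1} Lxy + cy cx^{-1} Lxx cx^{-1} cy
Lyy_star = Lyy - Lxy*cy/cx - cy*Lxy/cx + cy*Lxx*cy/cx**2

val = Lyy_star.subs({x: R/sp.sqrt(2), y: R/sp.sqrt(2), lam: 2})
print(sp.simplify(val))                      # expect 16 > 0, confirming a local minimum
```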