Technical Report: Subgradient Optimization for Convex Multiparametric Programming


J.T. Leverenz, H. Lee, M.M. Wiecek
Department of Mathematical Sciences
Clemson University
Clemson, SC
[email protected]

December 15, 2015

Abstract

In this paper we develop a subgradient optimization methodology for convex multiparametric nonlinear programs. We define a parametric subgradient, extend some classical optimization results to the multiparametric case, and design a subgradient algorithm that is shown to converge under traditional conditions. We use this algorithm to solve two illustrative example problems and demonstrate its accuracy and efficiency by comparing it to a state-of-the-art multiparametric nonlinear programming algorithm.

1 Introduction

The area of multiparametric programming (MPP) has received increased attention in recent years as a means of studying and solving optimization problems in which some of the problem data is unknown or can change. It has recently been applied to problems in model predictive control [16, 21, 27], multiobjective programming [23], and multilevel hierarchical and decentralized optimization [9]. In these problems the unknown data is represented by parameters contained within a known set, and solutions are mappings from the parameter space to optimal decision variable and objective function values. Specifically, MPP involves finding three pieces of information [6]:

1. An exact or approximate expression for the optimal value as a function of the parameters.
2. An exact or approximate expression for the optimal decision variables as functions of the parameters.
3. The partition of the parameter space into critical regions for which the expressions in 1 and 2 are valid.

The virtue of parametric solutions to these problems lies in mapping the full range of solutions without knowing the exact conditions represented by the parameters. As conditions change, the optimal solution can quickly be determined by simple function evaluations, bypassing the need for expensive or time-consuming re-optimization of the entire system. This differs from sensitivity analysis, which is focused on posterior evaluation of the local region around a particular, static solution [3, 10]. By contrast, multiparametric optimization methods seek to provide a full map of optimal solutions over the entire parameter space [13, 28].
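To make the three pieces of information above concrete, consider a toy instance (chosen here for illustration; it is not taken from the paper):

$$f^*(t) = \min_{x \ge 0} \,(x - t)^2, \qquad t \in \Omega = [-1, 1].$$

Here the optimal value function, optimal decision function, and critical regions are

$$f^*(t) = \begin{cases} t^2, & t \in [-1, 0), \\ 0, & t \in [0, 1], \end{cases} \qquad x^*(t) = \begin{cases} 0, & t \in [-1, 0), \\ t, & t \in [0, 1], \end{cases}$$

so $\Omega$ is partitioned into the two critical regions $[-1, 0)$ and $[0, 1]$, on which the constraint $x \ge 0$ is active and inactive, respectively.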
1.1 State of the Art

Exact solutions to multiparametric linear (mp-LP) and quadratic problems (mp-QP) can be computed using the Karush-Kuhn-Tucker (KKT) conditions [13, 28]. Methods for general nonlinear problems (mp-NLP) produce approximate solutions and can be broadly categorized into three areas: path-following or homotopy methods [11, 15], parameter space partition methods [2, 25], and problem approximation methods [7, 8, 20]. A survey of recent advances in the latter two areas can be found in [6].

Homotopy methods are used only in the single-parameter case. They determine a continuous "path" of KKT points created as the parameter moves along an interval (i.e., the parameter space). Partition methods solve mp-NLPs by dividing the parameter space into smaller sets to approximate the critical regions of the optimal solution. The problem is then solved as a standard NLP at the vertices of each set, and optimal value and decision functions are interpolated from the results. Approximation methods replace the mp-NLP with a series of mp-LP or mp-QP problems that can then be solved for exact solutions. Additional approximations can be made until the desired accuracy is achieved.

These methods all proceed by solving the primal problem: dual problems are ignored except in some cases to provide parametric bounds on solutions [2, 10]. Dual problems play an important role in optimization, and exploring their use with parameters can open new possibilities for solution strategies for mp-NLPs. In particular, subgradient methods are often paired with dual problems. These algorithms are a popular choice for solving nonlinear programs: they tend to be straightforward to implement and are supported by a deep background of theory while remaining a rich field for ongoing research. Subgradient optimization was originally developed for nondifferentiable functions [1, 29] and has been applied in a variety of fields, including solving complex systems [14, 22], nonconvex NLPs [4, 12], and mixed-integer problems [29]. Fréchet subgradients of mp-NLPs have previously been explored in [24], but no work has been done on a characterization of the subgradient that can be utilized in an algorithm for solving mp-NLPs.
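For readers unfamiliar with subgradient optimization, the following is a minimal sketch of the classical projected subgradient method for a fixed, non-parametric convex problem; it is the standard method, not the parametric algorithm developed in this paper, and the objective, step-size rule, and feasible set in the example are illustrative assumptions:

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, steps=2000):
    """Classical projected subgradient method for min f(x) over a convex set.

    f(x)       : convex, possibly nondifferentiable objective
    subgrad(x) : any subgradient of f at x
    project(x) : Euclidean projection onto the feasible set
    Uses the diminishing step size a_k = 1/k (sum a_k = inf, sum a_k^2 < inf),
    a traditional condition under which convergence is guaranteed, and tracks
    the best iterate, since individual subgradient steps need not decrease f.
    """
    x = project(np.asarray(x0, dtype=float))
    best_x, best_val = x.copy(), f(x)
    for k in range(1, steps + 1):
        x = project(x - (1.0 / k) * subgrad(x))
        val = f(x)
        if val < best_val:
            best_x, best_val = x.copy(), val
    return best_x, best_val

# Illustrative use: minimize |x1| + |x2 - 1| over the box [0, 2]^2.
f = lambda x: abs(x[0]) + abs(x[1] - 1.0)
subgrad = lambda x: np.array([np.sign(x[0]), np.sign(x[1] - 1.0)])
project = lambda x: np.clip(x, 0.0, 2.0)
x_best, f_best = projected_subgradient(f, subgrad, project, x0=[2.0, 2.0])
print(x_best, f_best)  # converges toward the minimizer (0, 1) with value 0
```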
The goal of this paper is to develop subgradient optimization for convex mp-NLPs. To do this we define a parametric subgradient and extend some classical optimization results to the multiparametric case, design an algorithm, and demonstrate that this algorithm can produce results competitive with those of a state-of-the-art mp-NLP approximation method.

The outline of the paper is as follows. In Section 2 we state the formulation of an mp-NLP and prove some theoretical results. Section 3 introduces a parametric subgradient algorithm and provides conditions for its convergence. Section 4 demonstrates the algorithm on two example problems. Results are summarized in Section 5.

2 Problem Formulation and Theory

This section contains the results needed to define a parametric subgradient and to use it to identify an optimal solution of an mp-NLP. Many current results related to subgradient optimization are extended to the multiparametric case. The formulation of the general mp-NLP depends on whether decision variables are treated as implicit or explicit functions of the parameters. The implicit formulation is commonly found in the literature (e.g., [6, 10, 15]) and is used for applications. The explicit formulation is less commonly used, but is useful for proving many of the theoretical results in this paper. Both formulations are presented in Section 2.1. The parametric subgradient is defined in Section 2.2, and several results are extended to the multiparametric case. The multiparametric Lagrangian dual problem is introduced in Section 2.3, with weak and strong duality relationships shown to hold in the presence of parameters.

2.1 Formulation

The implicit formulation of an mp-NLP is given by:

$$
\begin{aligned}
f^*(t) = \min_{x}\;\; & f(x, t) \\
\text{s.t.}\;\; & g(x, t) \le 0 \\
& h(x, t) = 0 \\
& x \in S, \;\; t \in \Omega
\end{aligned}
\tag{2.1}
$$

where $x \in S \subset \mathbb{R}^n$, $t$ is a vector of parameters, the set $\Omega \subset \mathbb{R}^p$ is the parameter space, $f : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}$, $g : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^m$, and $h : \mathbb{R}^n \times \mathbb{R}^p \to \mathbb{R}^l$. For $t \in \Omega$ the feasible set is determined by $X(t) = \{x \in S : g(x, t) \le 0, \; h(x, t) = 0\}$. We assume that $X(t) \ne \emptyset$ for all $t \in \Omega$, that $\Omega$ is a closed, connected set, and that all functions and sets are continuous and convex.

The definition of connected set is given by Fiacco [10], in which $\Omega$ is connected if there do not exist disjoint open sets $A_1$ and $A_2$ such that $\Omega \subset A_1 \cup A_2$, $\Omega \cap A_1 \ne \emptyset$, and $\Omega \cap A_2 \ne \emptyset$. This does not present an obstacle to the practical solution of (2.1), since a disconnected set can be regarded as the union of disjoint sets $A_i$, each of which can be treated as a separate parameter space. Likewise, the condition that $X(t) \ne \emptyset$ for all $t \in \Omega$ is based on the assumption that infeasible parameter values $t$ can be removed from the parameter space, leaving a collection of disjoint feasible parameter spaces. This is a strong assumption from a practical standpoint, since it cannot always be known in advance that $X(t) \ne \emptyset$ for the chosen $\Omega$, but the identification and removal of infeasible parameter values is a separate issue beyond the scope of this paper.

The solution to (2.1) maps each element of $\Omega$ to an optimal solution $x^*$ and optimal value $f^*$. The result of this mapping is an optimal decision function $x^*(t)$ and an optimal value function $f^*(t) = f(x^*(t), t)$. These solutions are typically piecewise functions of the parameters, and they partition $\Omega$ into critical regions defined by subsets of $\Omega$ associated with each part of $x^*(t)$ and $f^*(t)$.

Problem (2.1) is the most convenient and common way to formulate an mp-NLP. However, for the purposes of proving the theoretical results in this paper it is more advantageous to reformulate (2.1) so that the variables $x$ are explicit functions of the parameters $t$. In the explicit formulation of the general mp-NLP, the variables $x$ are treated as functions $x(t)$ belonging to a vector-valued function space. The following theorem [10, Theorem 2.2.2] motivates the choice of function space.

Theorem 2.1. Consider (2.1) where $X(t)$ is a continuous point-to-set mapping from $\Omega$ to $\mathbb{R}^n$ and $f$ is a continuous function on $\mathbb{R}^n \times \Omega$. Then the optimal value function $f^*(t)$ is continuous on $\Omega$ and the optimal decision functions $x^*(t)$ are upper semicontinuous.

With this result an appropriate vector-valued function space equipped with an inner product can be chosen and utilized for the explicit formulation of (2.1).

Definition 2.2. The Hilbert space $L^2(\Omega; \mathbb{R}^n)$ is the space of square-integrable, vector-valued functions on $\Omega$. For $x(t) \in L^2(\Omega; \mathbb{R}^n)$, $x : \Omega \to \mathbb{R}^n$. When $n = 1$, $L^2(\Omega; \mathbb{R})$ is written as $L^2(\Omega)$. The inner product is

$$\langle x_1(t), x_2(t) \rangle = \int_\Omega x_1(t)^T x_2(t) \, dt \qquad \text{for } x_1(t), x_2(t) \in L^2(\Omega; \mathbb{R}^n)$$

and the induced norm is

$$\|x(t)\|^2 = \int_\Omega x(t)^T x(t) \, dt.$$

For $f_1(t), f_2(t) \in L^2(\Omega)$ we say that $f_1(t) \le f_2(t)$ if the inequality holds for every $t \in \Omega$.
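Definition 2.2 is used analytically in what follows, but for intuition the inner product and induced norm can be approximated numerically when $\Omega$ is an interval. A minimal sketch, assuming a one-dimensional parameter space and trapezoidal quadrature (the grid and the sample functions are illustrative choices, not part of the paper):

```python
import numpy as np

def l2_inner(x1, x2, t_grid):
    """Trapezoidal approximation of the integral over Omega of x1(t)^T x2(t) dt.

    x1, x2: callables mapping a parameter t to a vector in R^n.
    t_grid: discretization of the (one-dimensional) parameter space Omega.
    """
    vals = np.array([float(x1(t) @ x2(t)) for t in t_grid])
    dt = np.diff(t_grid)
    return float(np.sum(0.5 * (vals[:-1] + vals[1:]) * dt))

def l2_norm(x, t_grid):
    """Induced norm ||x|| = sqrt(<x, x>)."""
    return np.sqrt(l2_inner(x, x, t_grid))

# Example with Omega = [0, 1] and n = 2:
t_grid = np.linspace(0.0, 1.0, 1001)
x1 = lambda t: np.array([t, 1.0])
x2 = lambda t: np.array([1.0, t])
print(l2_inner(x1, x2, t_grid))  # integral of 2t over [0, 1] = 1
print(l2_norm(x1, t_grid))       # sqrt(integral of t^2 + 1) = sqrt(4/3) ~ 1.1547
```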