An Efficient Global Optimization Algorithm for Nonlinear Sum-of-Ratios Problem


Yun-Chol Jong^a ∗†

^a Center of Natural Sciences, University of Science, Pyongyang, DPR Korea

May 3, 2012

∗ E-mail: [email protected]
† Address: Gwahak-1 dong, Unjong district, Pyongyang, DPR Korea

Abstract

This paper presents a practical method for finding the globally optimal solution to the nonlinear sum-of-ratios problem arising in image processing, engineering and management. Unlike traditional methods, which may get trapped in local minima due to the non-convex nature of this problem, our approach provides a theoretical guarantee of global optimality. Our algorithm is based on solving a sequence of convex programming problems and has a global linear and local superlinear/quadratic rate of convergence. The practical efficiency of the algorithm is demonstrated by numerical experiments on synthetic data.

2010 Mathematics Subject Classification: 90C26, 90C32, 65K05

Keywords: fractional programming, non-convex optimization, global optimization algorithm, sum-of-ratios problem, guaranteed global optimality.

1 Introduction

The sum-of-ratios problem, which is to minimize (maximize) a sum of several fractional functions subject to convex constraints, is a non-convex optimization problem that is difficult to solve by traditional optimization methods. The problem arises in many applications such as optimization of the average element shape quality in the finite element method, computer graphics and management ([2],[3],[4]). In [4], many problems of projective geometry, including multiview triangulation, camera resectioning and homography estimation, have been formulated as sum-of-ratios problems, and a branch-and-bound method has been proposed to find the global solution, relying on recent developments in fractional programming and the theory of convex underestimators ([6],[7],[8],[9],[14],[15]). In the method of [4], the number of variables increases by twice the number of fractional functions in the sum, and a second-order cone programming problem must be solved in each iteration to obtain a lower bound on the optimal value. Their algorithm is provably optimal; that is, given any tolerance ε, if the optimization problem is feasible, the algorithm returns a solution which is at most ε away from the global optimum. The branch-and-bound method, however, requires a great deal of computation, converges slowly, and a reasonable branching strategy is not easy to find.

Recently there has been some progress made towards finding the global solution to a few of these optimization problems ([16],[17]). However, the resulting algorithm is numerically unstable, computationally expensive and does not generalize to more views or to harder problems such as resectioning. In [18], linear matrix inequalities were used to approximate the global optimum, but no guarantee of actually obtaining the global optimum is given; there are also unsolved problems concerning numerical stability. Robustification using the L1-norm was presented in [19], but the approach is restricted to the affine camera model.

In this paper, an efficient global optimization algorithm is presented which transforms the sum-of-ratios problem into a parametric convex programming problem and finds the global solution successfully.

2 Equivalent parametric convex programming

The sum-of-ratios problem seeks to minimize the sum of fractional functions subject to convex constraints, and is formulated as follows.
$$
\begin{aligned}
\min\; & F(x) = \sum_{i=1}^{N} F_i(x) \\
\text{s.t.}\; & g_i(x) \le 0,\quad i = 1,\dots,m,\quad x \in \mathbb{R}^n,
\end{aligned}
\tag{2.1}
$$

where $F_i(x) = \dfrac{f_i(x)}{h_i(x)}$, $i = 1,\dots,N$, and $f_i(x)$, $g_i(x)$ and $-h_i(x)$ are twice continuously differentiable convex functions.

Let $X = \{x \in \mathbb{R}^n \mid g_i(x) \le 0,\ i = 1,\dots,m\}$. It is assumed that $f_i(x) \ge 0$ and $h_i(x) > 0$ for every $x \in X$, and that $\operatorname{int} X = \{x \in \mathbb{R}^n \mid g_i(x) < 0,\ i = 1,\dots,m\} \ne \emptyset$. Even with these restrictions the above problem is NP-complete [5].

It is easy to see that the problem (2.1) is equivalent to the following problem (at any solution each constraint $F_i(x) \le \beta_i$ holds with equality, since otherwise $\beta_i$ could be decreased):

$$
\begin{aligned}
\min\; & \sum_{i=1}^{N} \beta_i \\
\text{s.t.}\; & F_i(x) \le \beta_i,\quad i = 1,\dots,N, \\
& g_i(x) \le 0,\quad i = 1,\dots,m,\quad x \in \mathbb{R}^n.
\end{aligned}
\tag{2.2}
$$

Lemma 2.1. If $(\bar{x}, \bar{\beta})$ is the solution of the problem (2.2), then there exist $\bar{u}_i$, $i = 1,\dots,N$, such that $\bar{x}$ is a solution of the following problem for $u = \bar{u}$ and $\beta = \bar{\beta}$:

$$
\begin{aligned}
\min\; & \sum_{i=1}^{N} u_i \big(f_i(x) - \beta_i h_i(x)\big) \\
\text{s.t.}\; & g_i(x) \le 0,\quad i = 1,\dots,m,\quad x \in \mathbb{R}^n.
\end{aligned}
\tag{2.3}
$$

And $\bar{x}$ also satisfies the following system of equations for $u = \bar{u}$ and $\beta = \bar{\beta}$:

$$
u_i = \frac{1}{h_i(x)},\quad i = 1,\dots,N, \tag{2.4}
$$
$$
f_i(x) - \beta_i h_i(x) = 0,\quad i = 1,\dots,N. \tag{2.5}
$$

Proof. The constraint $F_i(x) \le \beta_i$ is equivalent to $f_i(x) - \beta_i h_i(x) \le 0$. Define the following function for the problem (2.2):

$$
L(x, \beta, w, u, v) = w \sum_{i=1}^{N} \beta_i + \sum_{i=1}^{N} u_i \big(f_i(x) - \beta_i h_i(x)\big) + \sum_{i=1}^{m} v_i g_i(x).
$$

By the Fritz John optimality condition (Theorem 4.2.8 of [1]), there exist $\bar{w}$, $\bar{u} = (\bar{u}_1,\dots,\bar{u}_N)$ and $\bar{v} = (\bar{v}_1,\dots,\bar{v}_m)$ such that

$$
\frac{\partial L}{\partial x} = \sum_{i=1}^{N} \bar{u}_i \big(\nabla f_i(\bar{x}) - \bar{\beta}_i \nabla h_i(\bar{x})\big) + \sum_{i=1}^{m} \bar{v}_i \nabla g_i(\bar{x}) = 0, \tag{2.6}
$$
$$
\frac{\partial L}{\partial \beta_i} = \bar{w} - \bar{u}_i h_i(\bar{x}) = 0,\quad i = 1,\dots,N, \tag{2.7}
$$
$$
\bar{u}_i \frac{\partial L}{\partial u_i} = \bar{u}_i \big(f_i(\bar{x}) - \bar{\beta}_i h_i(\bar{x})\big) = 0,\quad i = 1,\dots,N, \tag{2.8}
$$
$$
\bar{v}_i \frac{\partial L}{\partial v_i} = \bar{v}_i g_i(\bar{x}) = 0,\quad i = 1,\dots,m, \tag{2.9}
$$
$$
g_i(\bar{x}) \le 0,\quad \bar{v}_i \ge 0,\quad i = 1,\dots,m, \tag{2.10}
$$
$$
f_i(\bar{x}) - \bar{\beta}_i h_i(\bar{x}) \le 0,\quad \bar{u}_i \ge 0,\quad i = 1,\dots,N, \tag{2.11}
$$
$$
\bar{w} \ge 0,\quad (\bar{w}, \bar{u}, \bar{v}) \ne (0, 0, 0). \tag{2.12}
$$

Suppose that $\bar{w} = 0$. Then, by (2.7), we have $\bar{u} = 0$ because $h_i(x) > 0$, $i = 1,\dots,N$, for all $x \in X$. Hence, it follows from (2.6), (2.9), (2.10) and (2.12) that

$$
\sum_{i \in I(\bar{x})} \bar{v}_i \nabla g_i(\bar{x}) = 0, \tag{2.13}
$$
$$
\sum_{i \in I(\bar{x})} \bar{v}_i > 0,\quad \bar{v}_i \ge 0,\ i \in I(\bar{x}), \tag{2.14}
$$

where $I(\bar{x}) = \{i \mid g_i(\bar{x}) = 0,\ 1 \le i \le m\}$. By the Slater condition, there exists a point $x^0$ such that

$$
g_i(x^0) < 0,\quad i = 1,\dots,m. \tag{2.15}
$$

Since the $g_i(x)$, $i = 1,\dots,m$, are convex, it follows from (2.15) that

$$
\nabla g_i(\bar{x})^T (x^0 - \bar{x}) \le g_i(x^0) - g_i(\bar{x}) < 0,\quad i \in I(\bar{x}). \tag{2.16}
$$

Letting $d = x^0 - \bar{x}$, from (2.16) and (2.14) we have $\Big(\sum_{i \in I(\bar{x})} \bar{v}_i \nabla g_i(\bar{x})\Big)^T d < 0$, which contradicts (2.13). Thus, we have $\bar{w} > 0$.

Denoting $\bar{u}/\bar{w}$ and $\bar{v}/\bar{w}$ by $\bar{u}$ and $\bar{v}$ again, respectively, we see that (2.7) is equivalent to (2.4), and then (2.8) is equivalent to (2.5) because $\bar{u}_i > 0$, $i = 1,\dots,N$, by (2.4). Given $u = \bar{u}$ and $\beta = \bar{\beta}$, (2.6), (2.9) and (2.10) are just the KKT conditions for the problem (2.3). Since the problem (2.3) is a convex program for parameters $u > 0$ and $\beta \ge 0$, the KKT conditions are also sufficient for optimality, and hence $\bar{x}$ is the solution of (2.3) for $u = \bar{u}$ and $\beta = \bar{\beta}$. ∎
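Lemma 2.1 exposes the computational structure exploited in the sequel: for fixed parameters $\alpha = (\beta, u)$ the subproblem (2.3) is an ordinary smooth convex program, and a solution of (2.2) is characterized by the parameter conditions (2.4)–(2.5). The following is a minimal sketch of that structure on a toy two-ratio instance. The instance data, the use of SciPy's SLSQP solver, the box constraints standing in for general $g_i(x) \le 0$, and the plain fixed-point parameter update are all illustrative assumptions of ours; the paper's actual algorithm updates $\alpha$ by a more careful scheme with, per the abstract, global linear and local superlinear/quadratic convergence.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance (our own data, not from the paper):
# minimize F(x) = sum_i f_i(x)/h_i(x) over the box 0 <= x_j <= 2,
# with f_i convex and nonnegative, h_i concave (here affine) and positive on X.
f = [lambda x: (x[0] - 1.0) ** 2 + 1.0,
     lambda x: (x[1] - 1.5) ** 2 + 1.0]
h = [lambda x: x[0] + 1.0,
     lambda x: 0.5 * x[1] + 1.0]
N = len(f)
bounds = [(0.0, 2.0), (0.0, 2.0)]

x = np.array([0.5, 0.5])                                  # feasible start
beta = np.array([fi(x) / hi(x) for fi, hi in zip(f, h)])  # beta_i = F_i(x)
u = np.array([1.0 / hi(x) for hi in h])                   # u_i = 1/h_i(x)

for _ in range(100):
    # Convex subproblem (2.3) for the current fixed parameters (beta, u).
    def subproblem(y, beta=beta, u=u):
        return sum(u[i] * (f[i](y) - beta[i] * h[i](y)) for i in range(N))
    x = minimize(subproblem, x, method="SLSQP", bounds=bounds).x

    # Re-estimate the parameters from the optimality system (2.4)-(2.5).
    beta_new = np.array([fi(x) / hi(x) for fi, hi in zip(f, h)])
    u_new = np.array([1.0 / hi(x) for hi in h])
    if max(np.max(np.abs(beta_new - beta)), np.max(np.abs(u_new - u))) < 1e-9:
        beta, u = beta_new, u_new
        break
    beta, u = beta_new, u_new

print("x =", x, " F(x) =", sum(fi(x) / hi(x) for fi, hi in zip(f, h)))
```

By construction, any fixed point of this iteration satisfies (2.4)–(2.5), so only a smooth convex program has to be solved per pass; this is exactly the property that makes the parametric reformulation attractive compared with branch-and-bound on the original non-convex problem.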
Remark 2.1. Consider the maximization problem

$$
\begin{aligned}
\max\; & F(x) = \sum_{i=1}^{N} F_i(x) \\
\text{s.t.}\; & g_i(x) \le 0,\quad i = 1,\dots,m,\quad x \in \mathbb{R}^n,
\end{aligned}
$$

where $F_i(x) = \dfrac{f_i(x)}{h_i(x)}$, $i = 1,\dots,N$, and $f_i(x)$, $-g_i(x)$ and $-h_i(x)$ are concave functions, and $f_i(x) \ge 0$, $h_i(x) > 0$, $i = 1,\dots,N$, on the feasible set $X$.

The above problem is equivalent to the following problem:

$$
\begin{aligned}
\max\; & \sum_{i=1}^{N} \beta_i \\
\text{s.t.}\; & F_i(x) \ge \beta_i,\quad i = 1,\dots,N, \\
& g_i(x) \le 0,\quad i = 1,\dots,m,\quad x \in \mathbb{R}^n.
\end{aligned}
$$

Then we obtain the same result as in Lemma 2.1: if $(\bar{x}, \bar{\beta})$ is the solution of the above maximization problem, then there exist $\bar{u}_i$, $i = 1,\dots,N$, such that $\bar{x}$ is a solution of the following problem for $u = \bar{u}$ and $\beta = \bar{\beta}$:

$$
\begin{aligned}
\max\; & \sum_{i=1}^{N} u_i \big(f_i(x) - \beta_i h_i(x)\big) \\
\text{s.t.}\; & g_i(x) \le 0,\quad i = 1,\dots,m,\quad x \in \mathbb{R}^n,
\end{aligned}
$$

and $\bar{x}$ also satisfies the system of equations (2.4) and (2.5) for $u = \bar{u}$ and $\beta = \bar{\beta}$. The above problem is a convex programming problem for given parameters $\beta \ge 0$ and $u > 0$. In what follows, all the results for the minimization problem hold true for the maximization problem as well.

Let $\alpha = (\beta, u)$ denote the parameter vector and let

$$
\Omega = \{\alpha = (\beta, u) \in \mathbb{R}^{2N} \mid 0 \le \beta \le \beta^u,\ 0 < u^l \le u \le u^u\},
$$

where $\beta^u = (\beta^u_1,\dots,\beta^u_N)$, $u^l = (u^l_1,\dots,u^l_N)$, $u^u = (u^u_1,\dots,u^u_N)$, and

$$
\beta^u_i = \max_{x \in X} \frac{f_i(x)}{h_i(x)},\quad u^u_i = \max_{x \in X} \frac{1}{h_i(x)},\quad u^l_i = \min_{x \in X} \frac{1}{h_i(x)},\quad i = 1,\dots,N.
$$

Remark 2.2. The bounds $\beta^u$, $u^l$ and $u^u$ involved in the definition of $\Omega$ are needed only for the theoretical analysis; they are not needed at all for solving the problem (2.1).

Let $x(\alpha)$ be the solution of the problem (2.3) for fixed $\alpha \in \Omega$, and let

$$
\varphi(\alpha) = \sum_{i=1}^{N} u_i \big(f_i(x(\alpha)) - \beta_i h_i(x(\alpha))\big), \tag{2.17}
$$
$$
\psi^1_i(\alpha) = -f_i(x(\alpha)) + \beta_i h_i(x(\alpha)),\quad i = 1,\dots,N, \tag{2.18}
$$
$$
\psi^2_i(\alpha) = -1 + u_i h_i(x(\alpha)),\quad i = 1,\dots,N. \tag{2.19}
$$

Let

$$
\hat{\Omega} = \{\alpha \in \Omega \mid \psi^1_i(\alpha) = 0,\ \psi^2_i(\alpha) = 0,\ i = 1,\dots,N\},
$$

that is, the set of parameters satisfying the system

$$
\psi^1_i(\alpha) = 0,\quad i = 1,\dots,N, \tag{2.20}
$$
$$
\psi^2_i(\alpha) = 0,\quad i = 1,\dots,N. \tag{2.21}
$$

Corollary 2.1.