Canonical Duality Theory: Connections Between Nonconvex Mechanics and Global Optimization

Total Pages: 16 · File Type: PDF · Size: 1020 KB

Chapter 8
Canonical Duality Theory: Connections between Nonconvex Mechanics and Global Optimization
David Y. Gao and Hanif D. Sherali

Dedicated to Professor Gilbert Strang on the occasion of his 70th birthday

Summary. This chapter presents a comprehensive review and some new developments on canonical duality theory for nonconvex systems. Based on a tricanonical form for quadratic minimization problems, an insightful relation between canonical dual transformations and nonlinear (or extended) Lagrange multiplier methods is presented. Connections between complementary variational principles in nonconvex mechanics and Lagrange duality in global optimization are also revealed within the framework of the canonical duality theory. Based on this framework, traditional saddle Lagrange duality and the so-called biduality theory, discovered in convex Hamiltonian systems and d.c. programming, are presented in a unified way; together, they serve as a foundation for the triality theory in nonconvex systems. Applications are illustrated by a class of nonconvex problems in continuum mechanics and global optimization. It is shown that by the use of the canonical dual transformation, these nonconvex constrained primal problems can be converted into certain simple canonical dual problems, which can be solved to obtain all extremal points. Optimality conditions (both local and global) for these extrema can be identified by the triality theory. Some new results on general nonconvex programming with nonlinear constraints are also presented as applications of this canonical duality theory. This review brings some fundamentally new insights into nonconvex mechanics, global optimization, and computational science.

Key words: Duality, triality, Lagrangian duality, nonconvex mechanics, global optimization, nonconvex variations, canonical dual transformations, critical point theory, semilinear equations, NP-hard problems, quadratic programming

David Y. Gao, Department of Mathematics, Virginia Tech, Blacksburg, VA 24061, U.S.A., e-mail: [email protected]
Hanif D. Sherali, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA 24061, U.S.A., e-mail: [email protected]

D.Y. Gao, H.D. Sherali (eds.), Advances in Applied Mathematics and Global Optimization, Advances in Mechanics and Mathematics 17, DOI 10.1007/978-0-387-75714-8_8, © Springer Science+Business Media, LLC 2009

8.1 Introduction

Complementarity and duality are two inspiring, closely related concepts. Together they play fundamental roles in multidisciplinary fields of mathematical science, especially in engineering mechanics and optimization. The study of complementarity and duality in mathematics and mechanics has had a long history since the well-known Legendre transformation was formally introduced in 1787. This elegant transformation plays a key role in complementary duality theory. In classical mechanical systems, each energy function defined in a configuration space is linked via the Legendre transformation with a complementary energy in the dual (source) space, through which the Lagrangian and Hamiltonian can be formulated. In static systems, the convex total potential energy leads to a saddle Lagrangian through which a beautiful saddle min-max duality theory can be constructed.
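A minimal numerical illustration of the Legendre pairing just described (the quadratic stored energy below is an illustrative choice, not an example from the chapter): for W(e) = k e^2/2 the complementary energy is W*(s) = s^2/(2k), and the pair satisfies the Fenchel-Young equality W(e) + W*(s) = s e along the constitutive relation s = dW/de.

    import numpy as np

    k  = 3.0                                  # illustrative stiffness (assumption)
    W  = lambda e: 0.5 * k * e**2             # convex stored energy on the configuration space
    Ws = lambda s: 0.5 * s**2 / k             # complementary energy on the dual (source) space

    # Legendre transform computed numerically: W*(s) = max_e (s*e - W(e))
    e_grid = np.linspace(-10, 10, 200001)
    def legendre(s):
        return np.max(s * e_grid - W(e_grid))

    for s in [-2.0, 0.5, 4.0]:
        e = s / k                             # constitutive relation e = dW*/ds, i.e. s = dW/de
        print(s, legendre(s), Ws(s),          # numerical and closed-form complementary energy agree
              W(e) + Ws(s) - s * e)           # Fenchel-Young equality: W(e) + W*(s) - s*e = 0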
This saddle Lagrangian plays a central role in classical duality theory in convex analysis and constrained optimization. In convex dynamic systems, however, the total action is usually a nonconvex d.c. function, that is, the difference of convex kinetic energy and total potential functions. In this case, the classical Lagrangian is no longer a saddle function, but the Hamiltonian is convex in each of its variables. It turns out that instead of the Lagrangian, the Hamiltonian has been extensively used in convex dynamics. From a geometrical point of view, Lagrangian and Hamiltonian structures in convex systems and d.c. programming display an appealing symmetry, which was widely studied by their founders. Unfortunately, such a symmetry in nonconvex systems breaks down. It turns out that in recent times, tremendous effort and attention have been focused on the role of symmetry and symmetry-breaking in Hamiltonian mechanics in order to gain a deeper understanding of nonlinear and nonconvex phenomena (see Marsden and Ratiu, 1995).

The earliest examples of Lagrangian duality in engineering mechanics are probably the complementary energy principles proposed by Haar and von Kármán in 1909 for elasto-perfect plasticity and by Hellinger in 1914 for continuum mechanics. Since the boundary conditions in Hellinger's principle were clarified by E. Reissner in 1953 (see Reissner, 1996), the complementary-dual variational principles and methods have been studied extensively for more than 50 years by applied mathematicians and engineers (see Arthurs, 1980, Noble and Sewell, 1972).[1] The development of mathematical duality theory in convex variational analysis and optimization has had a similar history since W. Fenchel proposed the well-known Fenchel transformation in 1949. After the revolutionary concepts of superpotential and subdifferentials introduced by J. J. Moreau in 1966 in the study of frictional mechanics, the modern mathematical theory of duality has been well developed by celebrated mathematicians such as R. T. Rockafellar (1967, 1970, 1974), Moreau (1968), Ekeland (1977, 2003), I. Ekeland and R. Temam (1976), F. H. Clarke (1983, 1985), Auchmuty (1986, 2001), G. Strang (1979-1986), and Moreau, Panagiotopoulos, and Strang (1988). Mathematically speaking, in linear elasticity, where the total potential energy is convex, the Hellinger-Reissner complementary variational principle in engineering mechanics is equivalent to a Fenchel-Moreau-Rockafellar type dual variational problem. The so-called generalized complementary variational principle is actually the saddle Lagrangian duality theory, which serves as the foundation for hybrid/mixed finite element methods and has been subjected to extensive study during the past 40 years (see Strang and Fix (1973), Oden and Lee (1977), Pian and Tong (1980), Pian and Wu (2006), Han (2005), and the references cited therein).

[1] Eric Reissner (PhD 1938) was a professor in the Department of Mathematics at MIT from 1949 to 1969. According to Gil Strang, since Reissner moved to the Department of Mechanical and Aerospace Engineering at the University of California, San Diego in 1969, many applied mathematicians in the field of continuum mechanics, especially solid mechanics, switched from mathematics departments to engineering schools in the United States.
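A toy numerical check of the d.c. structure mentioned above (our own illustration, not from the chapter): f(x) = x^4/4 - x^2/2 is the difference of two convex pieces yet is itself nonconvex, the same double-well behavior that underlies the loss of the saddle structure discussed above.

    import numpy as np

    g = lambda x: 0.25 * x**4        # "kinetic-like" convex part (illustrative)
    h = lambda x: 0.50 * x**2        # "potential-like" convex part (illustrative)
    f = lambda x: g(x) - h(x)        # d.c. function: difference of two convex functions

    x = np.linspace(-2, 2, 401)

    def is_convex(fun, grid):
        # a function is convex on the grid iff its second difference is nonnegative
        return bool(np.all(np.diff(fun(grid), 2) >= -1e-12))

    print(is_convex(g, x), is_convex(h, x), is_convex(f, x))   # True True False
    # f has two global minima at x = -1 and x = +1 and a local maximum at 0,
    # i.e. a double-well landscape, even though both building blocks are convex.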
Early in the beginning of the last century, Haar and von Kármán (1909) had already realized that in nonlinear variational problems of continuum mechanics, direct approaches for solving the minimum potential energy (primal) problem can only provide upper bounding solutions. However, the minimum complementary energy principle (i.e., the maximum Lagrangian dual problem) provides a lower bound (the mathematical proof of the Haar-von Kármán principle was given by Greenberg in 1949). In safety analysis of engineering structures, the upper and lower bounding approximations to the so-called collapse states of elastoplastic structures are equally important to engineers. Therefore, primal-dual variational methods have been studied extensively by engineers for solving nonsmooth nonlinear problems (see Gao, 1991, 1992, Maier, 1969, 1970, Temam and Strang, 1980, Casciaro and Cascini, 1982, Gao, 1986, Gao and Hwang, 1988, Gao and Cheung, 1989, Gao and Strang, 1989b, Gao and Wierzbicki, 1989, Gao and Onate, 1990, Tabarrok and Rimrott, 1994). The article by Maier et al. (2000) serves as an excellent survey on the developments and applications of Lagrangian duality in engineering structural mechanics. In mathematical programming and computational science, the so-called primal-dual interior point methods are also based on the Lagrangian duality theory, which has emerged as a revolutionary technique during the last 15 years. Complementary to the interior-point methods, the so-called pan-penalty finite element programming developed by Gao in 1988 (1988a,b) is indeed a primal-dual exterior-point method. He proved that in rigid-perfectly plastic limit analysis, the exterior penalty functional and the associated perturbation method possess an elegant physical meaning, which led to an efficient dimension-rescaling technique in large-scale nonlinear mixed finite element programming problems (Gao, 1988b).

In mathematical programming and analysis, the subject of complementarity is closely related to constrained optimization, variational inequality, and fixed point theory. Through the classical Lagrangian duality, the KKT conditions of constrained optimization problems lead to corresponding complementarity problems. The primal-dual schema has continued to evolve for linear and convex mathematical programming during the past 20 years (see Walk, 1989, Wright, 1998). However, for nonconvex systems, it is well known that the KKT conditions are only necessary (under certain regularity conditions) for global optimality. Moreover, the underlying nonlinear complementarity problems are fundamentally difficult due to the nonmonotonicity of the nonlinear operators, and also, many problems in global optimization are NP-hard. The well-developed Fenchel-Moreau-Rockafellar duality theory will produce a so-called duality gap between the primal
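The duality gap just mentioned can be seen on a one-dimensional toy problem (an illustration of ours, not an example from the chapter): minimizing the double-well function f(x) = x^4 - 2x^2 subject to the equality constraint x = 0 has primal optimal value 0, while the classical Lagrangian dual only reaches -1.

    import numpy as np
    from scipy.optimize import minimize_scalar

    f = lambda x: x**4 - 2 * x**2          # nonconvex (double-well) objective, illustrative
    # primal: minimize f(x) subject to x = 0  ->  the only feasible point is x = 0
    p_star = f(0.0)                        # primal optimal value = 0

    # Lagrangian dual: g(lam) = inf_x [ f(x) + lam * x ]  (weak duality: g(lam) <= p*)
    def g(lam):
        res = minimize_scalar(lambda x: f(x) + lam * x, bounds=(-10, 10), method='bounded')
        return res.fun

    lams = np.linspace(-5, 5, 1001)
    d_star = max(g(l) for l in lams)       # dual optimal value

    print(p_star, d_star, p_star - d_star) # 0.0, about -1.0, duality gap about 1.0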
Recommended publications
  • Duality Gap Estimation via a Refined Shapley–Folkman Lemma | SIAM
    SIAM J. OPTIM. © 2020 Society for Industrial and Applied Mathematics, Vol. 30, No. 2, pp. 1094-1118.
    DUALITY GAP ESTIMATION VIA A REFINED SHAPLEY-FOLKMAN LEMMA. YINGJIE BI AND AO TANG.

    Abstract. Based on concepts like the kth convex hull and a finer characterization of nonconvexity of a function, we propose a refinement of the Shapley-Folkman lemma and derive a new estimate for the duality gap of nonconvex optimization problems with separable objective functions. We apply our result to the network utility maximization problem in networking and the dynamic spectrum management problem in communication as examples to demonstrate that the new bound can be qualitatively tighter than the existing ones. The idea is also applicable to cases with general nonconvex constraints.

    Key words. nonconvex optimization, duality gap, convex relaxation, network resource allocation
    AMS subject classifications. 90C26, 90C46
    DOI. 10.1137/18M1174805

    1. Introduction. The Shapley-Folkman lemma (Theorem 1.1) was stated and used to establish the existence of approximate equilibria in an economy with nonconvex preferences [13]. It roughly says that the sum of a large number of sets is close to convex and thus can be used to generalize results on convex objects to nonconvex ones.

    Theorem 1.1. Let S_1, S_2, ..., S_n be subsets of R^m. For each z ∈ conv(∑_{i=1}^n S_i) = ∑_{i=1}^n conv S_i, there exist points z_i ∈ conv S_i such that z = ∑_{i=1}^n z_i and z_i ∈ S_i except for at most m values of i.

    Remark 1.2.
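A rough one-dimensional illustration of the lemma (our own sketch, not the refined estimate of the paper): the Minkowski average of n copies of the nonconvex set S = {0, 1} is {0, 1/n, ..., 1}, whose distance to the convex hull [0, 1] shrinks like 1/(2n), so the sum of many nonconvex sets is nearly convex.

    import numpy as np

    S = np.array([0.0, 1.0])                      # a nonconvex set in R^1 (illustrative)

    def minkowski_average(n):
        # (1/n) * (S + S + ... + S) for n copies of S
        pts = {0.0}
        for _ in range(n):
            pts = {p + s for p in pts for s in S}
        return np.array(sorted(x / n for x in pts))

    for n in [1, 2, 5, 20, 100]:
        A = minkowski_average(n)
        # one-sided Hausdorff distance from the convex hull [0, 1] to the average:
        grid = np.linspace(0.0, 1.0, 2001)
        gap = np.min(np.abs(A[:, None] - grid[None, :]), axis=0).max()
        print(n, round(float(gap), 4))            # shrinks roughly like 1/(2n)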
  • A Hybrid Global Optimization Method: The One-Dimensional Case (Peiliang Xu)
    Journal of Computational and Applied Mathematics 147 (2002) 301-314, www.elsevier.com/locate/cam

    A hybrid global optimization method: the one-dimensional case
    Peiliang Xu, Disaster Prevention Research Institute, Kyoto University, Uji, Kyoto 611-0011, Japan
    Received 20 February 2001; received in revised form 4 February 2002

    Abstract. We propose a hybrid global optimization method for nonlinear inverse problems. The method consists of two components: local optimizers and feasible point finders. Local optimizers have been well developed in the literature and can reliably attain the local optimal solution. The feasible point finder proposed here is equivalent to finding the zero points of a one-dimensional function. It warrants that local optimizers either obtain a better solution in the next iteration or produce a global optimal solution. The algorithm assembled from these two components has been proved to converge globally and is able to find all the global optimal solutions. The method has been demonstrated to perform excellently with an example having more than 1,750,000 local minima over [-10^6, 10^7]. © 2002 Elsevier Science B.V. All rights reserved.

    Keywords: Interval analysis; Hybrid global optimization

    1. Introduction. Many problems in science and engineering can ultimately be formulated as an optimization (maximization or minimization) model. In the Earth Sciences, we have tried to collect data in a best way and then to extract the information on, for example, the Earth's velocity structures and/or its stress/strain state, from the collected data as much as possible.
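A minimal sketch in the spirit of this two-component scheme (a simplification of our own under stated assumptions, not the paper's algorithm): a local optimizer is alternated with a "feasible point finder" that looks for any point strictly below the current best value; here a dense scan of the sign of f(x) - f_best stands in for the one-dimensional root finding used in the paper.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # a one-dimensional test function with many local minima (illustrative choice)
    f = lambda x: np.sin(3.0 * x) + 0.01 * x**2
    lo, hi = -20.0, 20.0

    def local_optimizer(x0, radius=1.0):
        # any reliable local method works; here a bounded scalar minimization near x0
        res = minimize_scalar(f, bounds=(max(lo, x0 - radius), min(hi, x0 + radius)),
                              method='bounded')
        return res.x, res.fun

    def feasible_point_finder(f_best, samples=20001):
        # look for any x with f(x) < f_best, i.e. a sign change of f(x) - f_best
        xs = np.linspace(lo, hi, samples)
        below = xs[f(xs) < f_best - 1e-12]
        return below[0] if below.size else None

    x, f_best = local_optimizer(0.0)
    while True:
        x_new = feasible_point_finder(f_best)
        if x_new is None:               # no strictly better point exists: f_best is global
            break
        x, f_best = local_optimizer(x_new)

    print(x, f_best)                    # a global minimizer of f on [lo, hi] (up to sampling)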
  • Subdifferentiability and the Duality Gap
    Subdifferentiability and the Duality Gap
    Neil E. Gretsky ([email protected]), Department of Mathematics, University of California, Riverside
    Joseph M. Ostroy ([email protected]), Department of Economics, University of California, Los Angeles
    William R. Zame ([email protected]), Department of Economics, University of California, Los Angeles

    Abstract. We point out a connection between sensitivity analysis and the fundamental theorem of linear programming by characterizing when a linear programming problem has no duality gap. The main result is that the value function is subdifferentiable at the primal constraint if and only if there exists an optimal dual solution and there is no duality gap. To illustrate the subtlety of the condition, we extend Kretschmer's gap example to construct (as the value function of a linear programming problem) a convex function which is subdifferentiable at a point but is not continuous there. We also apply the theorem to the continuum version of the assignment model.

    Keywords: duality gap, value function, subdifferentiability, assignment model
    AMS codes: 90C48, 46N10

    1. Introduction

    The purpose of this note is to point out a connection between sensitivity analysis and the fundamental theorem of linear programming. The subject has received considerable attention and the connection we find is remarkably simple. In fact, our observation in the context of convex programming follows as an application of conjugate duality [11, Theorem 16]. Nevertheless, it is useful to give a separate proof since the conclusion is more readily established and its import for linear programming is more clearly seen. The main result (Theorem 1) is that in a linear programming problem there exists an optimal dual solution and there is no duality gap if and only if the value function is subdifferentiable at the primal constraint.
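A finite-dimensional sanity check of this correspondence (our own sketch; the subtleties addressed in the paper arise in infinite-dimensional problems): for a small LP the slope of the value function with respect to the right-hand side b matches an optimal dual solution, and there is no duality gap. The dual multipliers below are the inequality-constraint marginals reported by recent SciPy releases.

    import numpy as np
    from scipy.optimize import linprog

    # min c.x  s.t.  A x <= b,  x >= 0      (illustrative data)
    c = np.array([1.0, 2.0])
    A = np.array([[-1.0, -1.0],    # x1 + x2 >= 1 written as -x1 - x2 <= -1
                  [ 1.0, -1.0]])   # x1 - x2 <= 2
    b = np.array([-1.0, 2.0])

    def solve(b_vec):
        return linprog(c, A_ub=A, b_ub=b_vec, bounds=[(0, None), (0, None)], method='highs')

    primal = solve(b)
    dual_y = primal.ineqlin.marginals          # optimal dual multipliers reported by the solver
    print(primal.fun, dual_y)                  # optimal value and dual solution

    # finite-difference slope of the value function p(b) at the primal constraint
    eps = 1e-6
    for i in range(len(b)):
        db = np.zeros_like(b); db[i] = eps
        slope = (solve(b + db).fun - primal.fun) / eps
        print(i, slope, dual_y[i])             # slope of p(b) matches the dual multiplier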
  • GSI’19 Geometric Science of Information, Toulouse, 27th - 29th August 2019
    ALEAE GEOMETRIA
    GSI’19 Geometric Science of Information, Toulouse, 27th - 29th August 2019 // Program

    // Welcome message from GSI’19 chairmen

    On behalf of both the organizing and the scientific committees, it is our great pleasure to welcome all delegates, representatives and participants from around the world to the fourth International SEE conference on “Geometric Science of Information” (GSI’19), hosted at ENAC in Toulouse, 27th to 29th August 2019. GSI’19 benefits from a scientific sponsor and financial sponsors. The 3-day conference is also organized in the frame of the relations set up between SEE and scientific institutions or academic laboratories: ENAC, Institut Mathématique de Bordeaux, Ecole Polytechnique, Ecole des Mines ParisTech, INRIA, CentraleSupélec, Sony Computer Science Laboratories. We would like to express all our thanks to the local organizers (ENAC, IMT and CIMI Labex) for hosting this event at the interface between Geometry, Probability and Information Geometry. The GSI conference cycle was initiated by the Brillouin Seminar Team as early as 2009. The GSI’19 event has been motivated in the continuity of the first initiatives launched in 2013 at Mines ParisTech, consolidated in 2015 at Ecole Polytechnique and opened to new communities in 2017 at Mines ParisTech. We mention that in 2011 we organized an Indo-French workshop on “Matrix Information Geometry” that yielded an edited book in 2013, and in 2017 we collaborated on the CIRM seminar in Luminy, TGSI’17 “Topological & Geometrical Structures of Information”.

    // Frank Nielsen, co-chair
    Ecole Polytechnique, Palaiseau, France
    Sony Computer Science Laboratories, Tokyo, Japan
  • arXiv:2011.09194v1 [math.OC]: Lagrangian duality for nonconvex optimization problems with abstract convex functions
    Lagrangian duality for nonconvex optimization problems with abstract convex functions
    Ewa M. Bednarczuk · Monika Syga

    Abstract. We investigate Lagrangian duality for nonconvex optimization problems. To this aim we use the Φ-convexity theory and a minimax theorem for Φ-convex functions. We provide conditions for zero duality gap and strong duality. Among the classes of functions to which our duality results can be applied are prox-bounded functions, DC functions, weakly convex functions and paraconvex functions.

    Keywords: Abstract convexity · Minimax theorem · Lagrangian duality · Nonconvex optimization · Zero duality gap · Weak duality · Strong duality · Prox-regular functions · Paraconvex and weakly convex functions

    1 Introduction

    Lagrangian and conjugate dualities have far-reaching consequences for solution methods and theory in convex optimization in finite and infinite dimensional spaces. For the recent state of the art of the topic of convex conjugate duality we refer the reader to the monograph by Radu Boţ [5]. There exist numerous attempts to construct pairs of dual problems in nonconvex optimization, e.g., for DC functions [19], [34], for composite functions [8], DC and composite functions [30], [31], and for prox-bounded functions [15]. In the present paper we investigate Lagrange duality for general optimization problems within the framework of abstract convexity, namely, within the theory of Φ-convexity. The class of Φ-convex functions encompasses convex l.s.c.

    Ewa M. Bednarczuk, Systems Research Institute, Polish Academy of Sciences, Newelska 6, 01-447 Warsaw; Warsaw University of Technology, Faculty of Mathematics and Information Science, ul.
  • Lagrangian Duality and Perturbational Duality I
    Lagrangian duality and perturbational duality I
    Erik J. Balder

    Our approach to the Karush-Kuhn-Tucker theorem in [OSC] was entirely based on subdifferential calculus (essentially, it was an outgrowth of the two subdifferential calculus rules contained in the Fenchel-Moreau and Dubovitskii-Milyutin theorems, i.e., Theorems 2.9 and 2.17 of [OSC]). On the other hand, Proposition B.4(v) in [OSC] gives an intimate connection between the subdifferential of a function and the Fenchel conjugate of that function. In the present set of lecture notes this connection forms the central analytical tool by which one can study the connections between an optimization problem and its so-called dual optimization problem (such connections are commonly known as duality relations). We shall first study duality for the convex optimization problem that figured in our Karush-Kuhn-Tucker results. In this simple form such duality is known as Lagrangian duality. Next, in section 2 this is followed by a far-reaching extension of duality to abstract optimization problems, which leads to duality-stability relationships. Then, in section 3 we specialize duality to optimization problems with cone-type constraints, which includes Fenchel duality for semidefinite programming problems.

    1 Lagrangian duality

    An interesting and useful interpretation of the KKT theorem can be obtained in terms of the so-called duality principle (or relationships) for convex optimization. Recall our standard convex minimization problem as we had it in [OSC]:

        (P)   inf_{x ∈ S} { f(x) : g_1(x) ≤ 0, ..., g_m(x) ≤ 0, Ax − b = 0 }

    and recall that we allow the functions f, g_1, ..., g_m on R^n to have values in (−∞, +∞].
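A concrete instance of (P) and its Lagrangian dual (an illustrative toy problem of our own, not taken from the notes): minimize f(x) = x_1^2 + x_2^2 subject to g_1(x) = 1 - x_1 - x_2 <= 0; the dual function theta(lam) = inf_x L(x, lam) equals lam - lam^2/2, is maximized at lam = 1, and its maximum 1/2 coincides with the primal optimal value (strong duality for this convex problem).

    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: x[0]**2 + x[1]**2            # convex objective (illustrative)
    g = lambda x: 1.0 - x[0] - x[1]            # constraint g(x) <= 0

    # primal solution via a constrained solver
    primal = minimize(f, x0=[0.0, 0.0], constraints=[{'type': 'ineq', 'fun': lambda x: -g(x)}])

    # dual function  theta(lam) = inf_x [ f(x) + lam * g(x) ],  lam >= 0
    def theta(lam):
        return minimize(lambda x: f(x) + lam * g(x), x0=[0.0, 0.0]).fun

    lams = np.linspace(0.0, 3.0, 301)
    vals = [theta(l) for l in lams]
    lam_star = lams[int(np.argmax(vals))]

    print(primal.fun, max(vals), lam_star)     # ~0.5, ~0.5, ~1.0: zero duality gap (saddle Lagrangian)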
  • Bounding the Duality Gap for Problems with Separable Objective
    Bounding the Duality Gap for Problems with Separable Objective
    Madeleine Udell and Stephen Boyd
    March 8, 2014

    Abstract. We consider the problem of minimizing a sum of non-convex functions over a compact domain, subject to linear inequality and equality constraints. We consider approximate solutions obtained by solving a convexified problem, in which each function in the objective is replaced by its convex envelope. We propose a randomized algorithm to solve the convexified problem which finds an ε-suboptimal solution to the original problem. With probability 1, ε is bounded by a term proportional to the number of constraints in the problem. The bound does not depend on the number of variables in the problem or the number of terms in the objective. In contrast to previous related work, our proof is constructive, self-contained, and gives a bound that is tight.

    1 Problem and results

    The problem. We consider the optimization problem

        minimize    f(x) = ∑_{i=1}^n f_i(x_i)
        subject to  Ax ≤ b                          (P)
                    Gx = h,

    with variable x = (x_1, ..., x_n) ∈ R^N, where x_i ∈ R^{n_i}, with ∑_{i=1}^n n_i = N. There are m_1 linear inequality constraints, so A ∈ R^{m_1 × N}, and m_2 linear equality constraints, so G ∈ R^{m_2 × N}. The optimal value of (P) is denoted p*. The objective function terms are lower semi-continuous on their domains: f_i : S_i → R, where S_i ⊂ R^{n_i} is a compact set. We say that a point x is feasible (for (P)) if Ax ≤ b, Gx = h, and x_i ∈ S_i, i = 1, ..., n.
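The convexification step in the abstract (replacing each f_i by its convex envelope) can be imitated numerically in one dimension (a sketch of our own, not the paper's algorithm): the convex envelope of a function on a compact interval is its biconjugate, obtained by applying a discrete Legendre-Fenchel transform twice.

    import numpy as np

    # nonconvex one-dimensional objective term on a compact domain (illustrative)
    x = np.linspace(-1.5, 1.5, 1501)
    f = (x**2 - 1.0)**2                       # double well

    # discrete Legendre-Fenchel transform: f*(s) = max_x (s*x - f(x))
    s = np.linspace(-20.0, 20.0, 2001)
    f_conj = np.max(s[:, None] * x[None, :] - f[None, :], axis=1)
    # biconjugate f**(x) = max_s (s*x - f*(s)) is the convex envelope of f on the grid
    f_envelope = np.max(x[:, None] * s[None, :] - f_conj[None, :], axis=1)

    # the envelope never exceeds f; it agrees with f where f is already convex
    print(float(np.max(f_envelope - f)))            # <= 0 up to numerical tolerance
    print(float(f_envelope[np.argmin(np.abs(x))]))  # ~0 at x = 0, where f itself equals 1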
  • arXiv:1804.07332v1 [math.OC] 19 Apr 2018: Juniper: An Open-Source Nonlinear Branch-and-Bound Solver in Julia
    Juniper: An Open-Source Nonlinear Branch-and-Bound Solver in Julia
    Ole Kröger, Carleton Coffrin, Hassan Hijazi, Harsha Nagarajan
    Los Alamos National Laboratory, Los Alamos, New Mexico, USA

    Abstract. Nonconvex mixed-integer nonlinear programs (MINLPs) represent a challenging class of optimization problems that often arise in engineering and scientific applications. Because of nonconvexities, these programs are typically solved with global optimization algorithms, which have limited scalability. However, nonlinear branch-and-bound has recently been shown to be an effective heuristic for quickly finding high-quality solutions to large-scale nonconvex MINLPs, such as those arising in infrastructure network optimization. This work proposes Juniper, a Julia-based open-source solver for nonlinear branch-and-bound. Leveraging the high-level Julia programming language makes it easy to modify Juniper's algorithm and explore extensions, such as branching heuristics, feasibility pumps, and parallelization. Detailed numerical experiments demonstrate that the initial release of Juniper is comparable with other nonlinear branch-and-bound solvers, such as Bonmin, Minotaur, and Knitro, illustrating that Juniper provides a strong foundation for further exploration in utilizing nonlinear branch-and-bound algorithms as heuristics for nonconvex MINLPs.

    1 Introduction

    Many of the optimization problems arising in engineering and scientific disciplines combine both nonlinear equations and discrete decision variables. Notable examples include the blending/pooling problem [1,2] and the design and operation of power networks [3,4,5] and natural gas networks [6]. All of these problems fall into the class of mixed-integer nonlinear programs (MINLPs), namely,

        minimize  f(x, y)
        s.t.
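The branch-and-bound idea itself fits in a few lines; the sketch below (written in Python rather than Julia, and in no way Juniper's actual implementation) solves a one-variable toy MINLP by bounding each node with a locally solved continuous relaxation, which is exactly why the abstract calls nonlinear branch-and-bound a heuristic for nonconvex problems.

    import math
    from scipy.optimize import minimize_scalar

    f = lambda x: (x - 2.3)**2 + math.sin(5.0 * x)   # nonconvex objective (illustrative)

    best_val, best_x = float('inf'), None
    stack = [(0.0, 5.0)]                             # one integer variable x in {0, 1, ..., 5}

    while stack:
        lo, hi = stack.pop()
        if lo > hi:
            continue
        if hi - lo < 1e-9:                           # a single integer candidate remains
            if f(lo) < best_val:
                best_val, best_x = f(lo), int(round(lo))
            continue
        relax = minimize_scalar(f, bounds=(lo, hi), method='bounded')  # continuous relaxation
        # the relaxation is only solved locally, so the prune below makes this a heuristic
        if relax.fun >= best_val:
            continue                                 # bound: prune this node
        x = relax.x
        if abs(x - round(x)) < 1e-6:                 # integral relaxation solution: new incumbent
            best_val, best_x = relax.fun, int(round(x))
            continue
        for xi in (math.floor(x), math.ceil(x)):     # cheap rounding heuristic for the incumbent
            if lo <= xi <= hi and f(xi) < best_val:
                best_val, best_x = f(xi), xi
        stack.append((lo, float(math.floor(x))))     # branch on the fractional value
        stack.append((float(math.ceil(x)), hi))

    print(best_x, best_val)                          # expect x = 2 for this toy instance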
  • Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex
    Deep Neural Networks with Multi-Branch Architectures Are Less Non-Convex
    Hongyang Zhang, Junru Shao, Ruslan Salakhutdinov (Carnegie Mellon University), [email protected]

    Abstract. Several recently proposed architectures of neural networks such as ResNeXt, Inception, Xception, SqueezeNet and Wide ResNet are based on the designing idea of having multiple branches and have demonstrated improved performance in many applications. We show that one cause for such success is due to the fact that the multi-branch architecture is less non-convex in terms of duality gap. The duality gap measures the degree of intrinsic non-convexity of an optimization problem: smaller gap in relative value implies lower degree of intrinsic non-convexity. The challenge is to quantitatively measure the duality gap of highly non-convex problems such as deep neural networks. In this work, we provide strong guarantees of this quantity for two classes of network architectures. For the neural networks with arbitrary activation functions, multi-branch architecture and a variant of hinge loss, we show that the duality gap of both population and empirical risks shrinks to zero as the number of branches increases. This result sheds light on better understanding the power of over-parametrization where increasing the network width tends to make the loss surface less non-convex. For the neural networks with linear activation function and ℓ2 loss, we show that the duality gap of empirical risk is zero. Our two results work for arbitrary depths and adversarial data, while the analytical techniques might be of independent interest to non-convex optimization more broadly.
  • A Tutorial on Convex Optimization II: Duality and Interior Point Methods
    A Tutorial on Convex Optimization II: Duality and Interior Point Methods
    Haitham Hindi, Palo Alto Research Center (PARC), Palo Alto, California 94304, email: [email protected]

    Abstract—In recent years, convex optimization has become a computational tool of central importance in engineering, thanks to its ability to solve very large, practical engineering problems reliably and efficiently. The goal of this tutorial is to continue the overview of modern convex optimization from where our ACC2004 Tutorial on Convex Optimization left off, to cover important topics that were omitted there due to lack of space and time, and highlight the intimate connections between them. The topics of duality and interior point algorithms will be our focus, along with simple examples. The material in this tutorial is excerpted from the recent book on convex optimization by Boyd and Vandenberghe, who have made available a large amount of free course material and freely available software. These can be downloaded and used immediately by the reader both for self-study and to solve real problems.

    ... and concepts. For detailed examples and applications, the reader is referred to [8], [2], [6], [5], [7], [10], [12], [17], [9], [25], [16], [31], and the references therein. We now briefly outline the paper. There are two main sections after this one. Section II is on duality, where we summarize the key ideas of the general theory, illustrating the four main practical applications of duality with simple examples. Section III is on interior point algorithms, where the focus is on barrier methods, which can be implemented easily using only a few key technical components, and yet are highly effective both in theory and in practice. All of the theory we cover can be readily extended to general conic
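A bare-bones version of the barrier method described above (our own sketch, not code from the tutorial): minimize a convex quadratic subject to linear inequalities by minimizing f(x) + (1/t) Σ -log(b_i - a_i·x) for an increasing sequence of t; the suboptimality after each solve is roughly m/t for m constraints.

    import numpy as np
    from scipy.optimize import minimize

    # minimize (x1 - 1)^2 + (x2 - 2)^2  s.t.  x1 + x2 <= 2, x1 >= 0, x2 >= 0  (illustrative)
    f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
    A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
    b = np.array([2.0, 0.0, 0.0])

    def barrier_obj(x, t):
        slack = b - A @ x
        if np.any(slack <= 0):                     # outside the feasible interior
            return np.inf
        return f(x) + (1.0 / t) * np.sum(-np.log(slack))

    x = np.array([0.5, 0.5])                       # strictly feasible starting point
    for t in [1.0, 10.0, 100.0, 1000.0]:           # increase t; the central path -> optimum
        x = minimize(lambda z: barrier_obj(z, t), x, method='Nelder-Mead').x
        print(t, x, f(x))
    # approaches the constrained optimum x* = (0.5, 1.5) with f(x*) = 0.5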
  • Global Optimization, the Gaussian Ensemble, and Universal Ensemble Equivalence
    Probability, Geometry and Integrable Systems, MSRI Publications, Volume 55, 2007

    Global optimization, the Gaussian ensemble, and universal ensemble equivalence
    MARIUS COSTENIUC, RICHARD S. ELLIS, HUGO TOUCHETTE, AND BRUCE TURKINGTON

    With great affection this paper is dedicated to Henry McKean on the occasion of his 75th birthday.

    ABSTRACT. Given a constrained minimization problem, under what conditions does there exist a related, unconstrained problem having the same minimum points? This basic question in global optimization motivates this paper, which answers it from the viewpoint of statistical mechanics. In this context, it reduces to the fundamental question of the equivalence and nonequivalence of ensembles, which is analyzed using the theory of large deviations and the theory of convex functions. In a 2000 paper appearing in the Journal of Statistical Physics, we gave necessary and sufficient conditions for ensemble equivalence and nonequivalence in terms of support and concavity properties of the microcanonical entropy. In later research we significantly extended those results by introducing a class of Gaussian ensembles, which are obtained from the canonical ensemble by adding an exponential factor involving a quadratic function of the Hamiltonian. The present paper is an overview of our work on this topic. Our most important discovery is that even when the microcanonical and canonical ensembles are not equivalent, one can often find a Gaussian ensemble that satisfies a strong form of equivalence with the microcanonical ensemble known as universal equivalence. When translated back into optimization theory, this implies that an unconstrained minimization problem involving a Lagrange multiplier and a quadratic penalty function has the same minimum points as the original constrained problem.
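Translated into plain optimization terms, the last sentence can be checked on a tiny example (our own illustration, unrelated to the statistical-mechanics ensembles of the paper): with the correct Lagrange multiplier, adding a quadratic penalty makes an unconstrained minimization reproduce the constrained minimizer, while the penalty alone only approximates it for a finite penalty weight.

    import numpy as np
    from scipy.optimize import minimize

    f = lambda z: z[0]**2 + z[1]**2            # objective (illustrative)
    g = lambda z: z[0] + z[1]                  # constraint g(z) = c
    c = 1.0
    z_constrained = np.array([0.5, 0.5])       # minimizer of f subject to g(z) = c

    def unconstrained(lam, gamma):
        # Lagrange multiplier term + quadratic penalty, minimized without constraints
        obj = lambda z: f(z) + lam * (g(z) - c) + gamma * (g(z) - c)**2
        return minimize(obj, x0=[0.0, 0.0]).x

    print(unconstrained(lam=0.0,  gamma=5.0))   # penalty alone: biased away from (0.5, 0.5)
    print(unconstrained(lam=-1.0, gamma=5.0))   # correct multiplier + penalty: recovers (0.5, 0.5)
    print(unconstrained(lam=-1.0, gamma=0.0))   # in this convex case the multiplier alone suffices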
  • Linear Complementarity Problems on Extended Second Order Cones (ESOCLCP)
    Linear complementarity problems on extended second order cones
    S. Z. Németh, School of Mathematics, University of Birmingham, Watson Building, Edgbaston, Birmingham B15 2TT, United Kingdom, email: [email protected]
    L. Xiao, School of Mathematics, University of Birmingham, Watson Building, Edgbaston, Birmingham B15 2TT, United Kingdom, email: [email protected]
    November 9, 2018
    arXiv:1707.04268v5 [math.OC] 19 Jan 2018

    Abstract. In this paper, we study the linear complementarity problems on extended second order cones. We convert a linear complementarity problem on an extended second order cone into a mixed complementarity problem on the non-negative orthant. We state necessary and sufficient conditions for a point to be a solution of the converted problem. We also present solution strategies for this problem, such as the Newton method and the Levenberg-Marquardt algorithm. Finally, we present some numerical examples.

    Keywords: Complementarity Problem, Extended Second Order Cone, Conic Optimization
    2010 AMS Subject Classification: 90C33, 90C25

    1 Introduction

    Although research in cone complementarity problems (see the definition in the beginning of the Preliminaries) goes back a few decades only, the underlying concept of complementarity is much older, being first introduced by Karush in 1939 [1]. It seems that the concept of complementarity problems was first considered by Dantzig and Cottle in a technical report [2], for the non-negative orthant. In 1968, Cottle and Dantzig [3] restated the linear programming problem, the quadratic programming problem and the bimatrix game problem as a complementarity problem, which inspired the research in this field (see [4-8]). The complementarity problem is a cross-cutting area of research which has a wide range of applications in economics, finance and other fields.
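To give the flavor of the Newton and Levenberg-Marquardt strategies mentioned in the abstract, here is a sketch for the simplest cone, the non-negative orthant (our own illustration; it is not the extended-second-order-cone reformulation of the paper): the LCP conditions x >= 0, Mx + q >= 0, x·(Mx + q) = 0 are rewritten with the Fischer-Burmeister function and the resulting equations are solved by Levenberg-Marquardt least squares.

    import numpy as np
    from scipy.optimize import least_squares

    # LCP data (illustrative): find x >= 0 with w = M x + q >= 0 and x . w = 0
    M = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    q = np.array([-4.0, -5.0])

    def fischer_burmeister(a, b):
        # phi(a, b) = 0  <=>  a >= 0, b >= 0, a*b = 0
        return np.sqrt(a**2 + b**2) - a - b

    def residual(x):
        w = M @ x + q
        return fischer_burmeister(x, w)

    sol = least_squares(residual, x0=np.ones(2), method='lm')   # Levenberg-Marquardt
    x = sol.x
    w = M @ x + q
    print(x, w, x @ w)        # x >= 0, w >= 0, complementarity x.w ~ 0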