Parallelisation of Hybrid Metaheuristics for COP Solving Mohamed Khalil Labidi


To cite this version: Mohamed Khalil Labidi. Parallelisation of hybrid metaheuristics for COP solving. Other [cs.OH]. Université Paris sciences et lettres; Université de Tunis El-Manar, Faculté des Sciences de Tunis (Tunisie), 2018. English. NNT: 2018PSLED029. tel-02008495.

HAL Id: tel-02008495, https://tel.archives-ouvertes.fr/tel-02008495. Submitted on 5 Feb 2019. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

DOCTORAL THESIS of the Université de recherche Paris Sciences et Lettres (PSL Research University), prepared under a joint-supervision (cotutelle) agreement between the Université de Tunis El-Manar, Faculté des Sciences de Tunis, and the Université Paris-Dauphine.

Parallelization of hybrid metaheuristics for COP solving
Doctoral school no. 543, speciality: Computer Science
Defended on 20 September 2018 by Mohamed Khalil LABIDI, under the supervision of Ali Ridha MAHJOUB and Zaher MAHJOUB.

Composition of the jury:
M. Ali Ridha MAHJOUB, Université Paris-Dauphine, co-supervisor
M. Zaher MAHJOUB, Faculté des Sciences de Tunis, co-supervisor
M. Adnan YASSINE, Institut Supérieur d'Etudes Logistiques, reviewer
M. Frédéric LARDEUX, Faculté des Sciences, Université d'Angers, reviewer
M. Ibrahima DIARRASSOUBA, Université du Havre, examiner
M. Wady NAANAA, Ecole Nationale d'Ingénieurs de Tunis, examiner
M. Denis TRYSTRAM, Ecole Nationale Supérieure d'Informatique et de Mathématiques Appliquées, president of the jury

I dedicate this modest work:
To the memory of my grandfather Mohamed,
To the memory of my grandmother Khadija,
To the memory of my uncle Taoufik,
To the memory of my friend and colleague Malek,
To my dear sister Myriam and her husband Khaled,
To my dear brother Ahmed and his wife Sarah,
To my little nephews Mohamed Yassine and Amine,
To my aunts Hedia, Olfa, Monia, Sondess and Wahida,
To my uncles Yassine, Karim and Moncef,
To my cousins Omar, Baligh, Salim, Houssem, the adorable Wissem, Merwan, Sammy, Ilyes, Sirine and Amir,
To my brothers and sisters Omar, Nader, Bilel, Mohammed, Iaad & Meriem, Alex, Sophiane, Yousuf, Imrane, Mika, Kadjou, Ilyes, Mehdi, Steven, Mansour, Ayoub, Samir, Nicolas, Imen and Hela,
To my dear friend Robert Jouanique and his family,
And above all to my adorable grandmother BiBiBi and to my very dear parents Slah and Wassila: may you find here the fruit of a small part of your efforts and sacrifices. I love you.

Acknowledgements

I would first like to thank most particularly Mr. A. Ridha Mahjoub, Professor at the Université Paris-Dauphine, for his scientific and moral support throughout this work. He passed on to me the knowledge, the rigour, the motivation and the passion for research in general and for combinatorial optimization in particular. He also helped me through the difficult moments thanks to his very great interest in my work and his constant availability.
I would also like to thank most sincerely Mr. Zaher Mahjoub, Professor at the Faculté des Sciences de Tunis, for his ideas and his determination, which allowed me to broaden my field of knowledge and the work carried out during this thesis. Through his high scientific standards and his great esteem for my abilities, he also taught me never to give up and to give the best of myself.

I warmly thank Mr. Ibrahima Diarrassouba, associate professor at the Université du Havre, for all the help he gave me during this thesis: scientific help, of course, through our collaborations and the many discussions we had about my work, and through his advice and ideas.

I also thank Mrs. Anissa Omran, assistant professor at the Faculté des Sciences de Tunis, for trusting me by proposing this thesis topic, and for always having believed in me and in my ability to carry my research work to a successful conclusion.

I address all my thanks to Mr. Adnan Yassine, Professor at the Institut Supérieur d'Etudes Logistiques, and to Mr. Frédéric Lardeux, associate professor at the Faculté des Sciences d'Angers, for the honour they did me in agreeing to act as reviewers of this thesis.

I express my gratitude to Mr. Denis Trystram, Professor at the Institut polytechnique de Grenoble, and to Mr. Wady Naanaa, Professor at the Ecole Nationale d'Ingénieurs de Tunis, who kindly agreed to examine my thesis.

I would also like to thank very warmly my colleagues and friends at LAMSADE who shared these past years with me. The lively discussions and shared moments of relaxation, their good humour, their encouragement and their friendship, as well as our scientific exchanges and the work we did together, contributed greatly to my scientific and personal growth during these thesis years. I thank Youcef, Mohamed Lamine, Fabien, Mohammad Amin, Mahdi, Thomas, Raja Haddad, Céline, Hossein, Boris, Ioannis, Meriem, Pedro, Yassine, Ons, Manel, Anaëlle, Diana, Hiba, Ian-Christopher, Justin, Marcel, Raja, Hmida, Oussama and above all our dear Olivier, with whom I shared unforgettable moments at Paris-Dauphine and elsewhere.

A big thank you also to Juliette De Roquefeuil, Stéphanie Salon and Igor Bratusek for their patience and their precious help despite the complications of my administrative procedures.

I express my gratitude to Robert Jouanique and Mehrzia Ben Ismail, as well as to their respective families, for having welcomed, helped and supported me during my first years of the thesis. I also thank the EMF and AEMD communities for all the moments of joy and humanity they gave me in their company.

Finally, my thanks would not be complete without thanking my family and my friends, who have put up with me and supported me all these years and who took a genuine interest in my work and in the highly impenetrable field of combinatorial optimization in general.

Abstract

Combinatorial Optimization (CO) is an area of research in constant progress. Solving a Combinatorial Optimization Problem (COP) consists essentially in finding the best solution(s) in a set of feasible solutions, called the search space, whose cardinality is usually exponential in the size of the problem. To solve COPs, several methods have been proposed in the literature.
A distinction is made mainly between exact methods and approximation methods. Since an exact resolution of NP-complete problems is out of reach once the size of the problem exceeds a certain threshold, researchers have increasingly turned, in recent decades, to hybrid algorithms (HA) and to parallel computing. In this thesis we consider the COP class of survivable network design problems. We present a parallel hybrid approximation algorithm, based on a greedy algorithm, a Lagrangian relaxation algorithm and a genetic algorithm, which produces both lower and upper bounds for flow-based formulations. In order to validate the proposed approach, a series of experiments is carried out on several applications: the k-Edge-Connected Hop-Constrained Network Design Problem (kHNDP) when L = 2, 3; the Steiner k-Edge-Connected Network Design Problem (SkESNDP); and then two more general problems, namely the kHNDP when L ≥ 2 and the k-Edge-Connected Network Design Problem (kESNDP). The experimental study of the parallelisation is presented after that. In the last part of this work, we present two parallel exact algorithms: a distributed Branch-and-Bound and a distributed Branch-and-Cut. A series of experiments has been carried out on a cluster of 128 processors, and interesting speedups have been reached in kHNDP resolution when k = 3 and L = 3.

Key words: Branch-and-Bound, Branch-and-Cut, combinatorial optimization, hybridization, metaheuristic, network design, parallel computing.
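The abstract names three cooperating components. Purely as an illustration of how such a scheme yields both a lower and an upper bound, here is a toy sketch on a tiny covering problem (min c·x s.t. a·x ≥ b, x binary). It is emphatically not the thesis's algorithm, which targets survivable network design with flow-based formulations and runs the components in parallel; every function and constant below is invented for the example, and the components are kept sequential for brevity.

```python
import random

def greedy(c, a, b):
    """Cheapest-ratio greedy: feasible starting solution (upper bound seed)."""
    x = [0] * len(c)
    for i in sorted(range(len(c)), key=lambda i: c[i] / a[i]):
        if sum(a[j] * x[j] for j in range(len(c))) >= b:
            break
        x[i] = 1
    return x

def lagrangian_lower_bound(c, a, b, steps=200):
    """Subgradient ascent on the Lagrangian dual: a valid lower bound."""
    lam, best = 0.0, 0.0
    for t in range(1, steps + 1):
        x = [1 if c[i] - lam * a[i] < 0 else 0 for i in range(len(c))]
        val = lam * b + sum((c[i] - lam * a[i]) * x[i] for i in range(len(c)))
        best = max(best, val)
        g = b - sum(a[i] * x[i] for i in range(len(c)))   # subgradient
        lam = max(0.0, lam + g / t)                       # diminishing step
    return best

def ga_upper_bound(c, a, b, seed, pop=20, gens=100):
    """Tiny GA seeded with the greedy solution: improves the upper bound."""
    cost = lambda x: sum(ci * xi for ci, xi in zip(c, x))
    feas = lambda x: sum(ai * xi for ai, xi in zip(a, x)) >= b
    P = [seed] + [[random.randint(0, 1) for _ in c] for _ in range(pop - 1)]
    P = [x for x in P if feas(x)] or [seed]
    for _ in range(gens):
        x = min(random.sample(P, min(3, len(P))), key=cost)  # tournament
        y = [bit ^ (random.random() < 0.1) for bit in x]     # bit-flip mutation
        if feas(y):
            P.append(y)
            P.remove(max(P, key=cost))                       # drop the worst
    return min(cost(x) for x in P)

random.seed(0)
c, a, b = [4, 3, 6, 5, 2], [5, 4, 7, 6, 3], 12
x0 = greedy(c, a, b)
print("lower bound:", round(lagrangian_lower_bound(c, a, b), 2))
print("upper bound:", ga_upper_bound(c, a, b, x0))
```

The gap between the two printed bounds certifies the quality of the heuristic solution, which is the role the greedy/Lagrangian/genetic trio plays, at much larger scale, in the thesis.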
Recommended publications
  • Oracles, Ellipsoid Method and Their Uses in Convex Optimization (Lecturer: Matt Weinberg; Scribe: Sanjeev Arora)
Princeton Univ., F'17, COS 521: Advanced Algorithm Design. Lecture 17: Oracles, Ellipsoid method and their uses in convex optimization. Lecturer: Matt Weinberg. Scribe: Sanjeev Arora.

Oracle: a person or agency considered to give wise counsel or prophetic predictions or precognition of the future, inspired by the gods.

Recall that Linear Programming is the following problem:

maximize c^T x subject to Ax ≤ b, x ≥ 0,

where A is an m × n real constraint matrix and x, c ∈ R^n. Recall that if the number of bits needed to represent the input is L, a polynomial-time solution to the problem is allowed a running time of poly(n, m, L). The Ellipsoid algorithm for linear programming is a specific application of the ellipsoid method developed by the Soviet mathematicians Shor (1970) and Yudin and Nemirovskii (1975). Khachiyan (1979) applied the ellipsoid method to derive the first polynomial-time algorithm for linear programming. Although the algorithm is theoretically better than the Simplex algorithm, which has exponential running time in the worst case, it is very slow in practice and not competitive with Simplex. Nevertheless, it is a very important theoretical tool for developing polynomial-time algorithms for a large class of convex optimization problems, which are much more general than linear programming. In fact we can use it to solve convex optimization problems that are even too large to write down.

1 Linear programs too big to write down

Often we want to solve linear programs that are too large to even write down (or, for that matter, too big to fit into all the hard drives of the world).
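As a concrete instance of the formulation above, here is a minimal sketch that solves a small LP of exactly this shape. It is an illustration only (not from the lecture notes) and assumes SciPy is available; since scipy.optimize.linprog minimizes, the maximization is handled by negating c.

```python
import numpy as np
from scipy.optimize import linprog

# Small instance of: maximize c^T x  subject to  Ax <= b, x >= 0.
c = np.array([3.0, 2.0])           # objective coefficients (to maximize)
A = np.array([[1.0, 1.0],          # m x n constraint matrix
              [2.0, 1.0]])
b = np.array([4.0, 5.0])           # right-hand sides

# linprog minimizes, so pass -c; bounds=(0, None) encodes x >= 0.
res = linprog(-c, A_ub=A, b_ub=b, bounds=(0, None))
print("optimal x:", res.x)         # -> (1, 3)
print("optimal value:", -res.fun)  # -> 9
```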
  • Linear Programming Notes
Linear Programming Notes. Lecturer: David Williamson, Cornell ORIE. Scribe: Kevin Kircher, Cornell MAE. These notes summarize the central definitions and results of the theory of linear programming, as taught by David Williamson in ORIE 6300 at Cornell University in the fall of 2014. Proofs and discussion are mostly omitted. These notes also draw on Convex Optimization by Stephen Boyd and Lieven Vandenberghe, and on Stephen Boyd's notes on ellipsoid methods. Prof. Williamson's full lecture notes can be found here.

Contents
1 The linear programming problem
2 Duality
3 Geometry
4 Optimality conditions
  4.1 Answer 1: c^T x = b^T y for a y ∈ F(D)
  4.2 Answer 2: complementary slackness holds for a y ∈ F(D)
  4.3 Answer 3: the verifying y is in F(D)
5 The simplex method
  5.1 Reformulation
  5.2 Algorithm
  5.3 Convergence
  5.4 Finding an initial basic feasible solution
  5.5 Complexity of a pivot
  5.6 Pivot rules
  5.7 Degeneracy and cycling
  5.8 Number of pivots
  5.9 Varieties of the simplex method
  5.10 Sensitivity analysis
  5.11 Solving large-scale linear programs
6 Good algorithms
7 Ellipsoid methods
  7.1 Ellipsoid geometry
  7.2 The basic ellipsoid method
  7.3 The ellipsoid method with objective function cuts
  7.4 Separation oracles
8 Interior-point methods
  8.1 Finding a descent direction that preserves feasibility
  8.2 The affine-scaling direction
  • 1979 Khachiyan's Ellipsoid Method
Convex Optimization, CMU-10725. Ellipsoid Methods. Barnabás Póczos & Ryan Tibshirani.

Outline: linear programs; the simplex algorithm; running time: polynomial or exponential?; cutting planes and ellipsoid methods for LP; cutting planes and ellipsoid methods for unconstrained minimization.

Books to read: David G. Luenberger and Yinyu Ye, Linear and Nonlinear Programming; Boyd and Vandenberghe, Convex Optimization.

Back to linear programs. Inequality form of LPs using matrix notation; standard form of LPs. We already know that any LP can be rewritten as an equivalent standard-form LP.

Motivation. Linear programs can be viewed in two somewhat complementary ways: as continuous optimization (continuous variables, convex feasible region, continuous objective function) and as combinatorial problems (solutions can be found among the vertices of the convex polyhedron defined by the constraints). The issue with combinatorial search methods is that the number of vertices may be exponentially large, making direct search impossible for even modest-size problems.

Simplex method: jumping from one vertex to another, it improves the value of the objective until the process reaches an optimal point. It performs well in practice, visiting only a small fraction of the total number of vertices. Running time: polynomial or exponential?

The simplex method is not polynomial time. Dantzig observed that for problems with m ≤ 50 and n ≤ 200 the number of iterations is ordinarily less than 1.5m. At that time many researchers believed (and tried to prove) that the simplex algorithm is polynomial in the size of the problem (n, m). In 1972, Klee and Minty showed by examples that for certain linear programs the simplex method will examine every vertex. These examples proved that in the worst case, the simplex method requires a number of steps that is exponential in the size of the problem.
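For reference, one standard form of the Klee-Minty construction mentioned above is the following LP (this particular scaling follows common textbook presentations rather than the slides themselves):

\begin{align*}
\text{maximize}\quad & \sum_{j=1}^{n} 2^{\,n-j}\, x_j \\
\text{subject to}\quad & 2\sum_{j=1}^{i-1} 2^{\,i-j}\, x_j \;+\; x_i \;\le\; 5^{\,i}, \qquad i = 1,\dots,n, \\
& x_j \ge 0, \qquad j = 1,\dots,n.
\end{align*}

Starting from x = 0 and using Dantzig's largest-coefficient pivot rule, the simplex method visits all 2^n vertices of this deformed hypercube before reaching the optimum.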
  • Convex Optimization: Algorithms and Complexity
Foundations and Trends® in Machine Learning, Vol. 8, No. 3-4 (2015), 231-357. © 2015 S. Bubeck. DOI: 10.1561/2200000050.

Convex Optimization: Algorithms and Complexity. Sébastien Bubeck, Theory Group, Microsoft Research. [email protected]

Contents
1 Introduction
  1.1 Some convex optimization problems in machine learning
  1.2 Basic properties of convexity
  1.3 Why convexity?
  1.4 Black-box model
  1.5 Structured optimization
  1.6 Overview of the results and disclaimer
2 Convex optimization in finite dimension
  2.1 The center of gravity method
  2.2 The ellipsoid method
  2.3 Vaidya's cutting plane method
  2.4 Conjugate gradient
3 Dimension-free convex optimization
  3.1 Projected subgradient descent for Lipschitz functions
  3.2 Gradient descent for smooth functions
  3.3 Conditional gradient descent, aka Frank-Wolfe
  3.4 Strong convexity
  3.5 Lower bounds
  3.6 Geometric descent
  3.7 Nesterov's accelerated gradient descent
4 Almost dimension-free convex optimization in non-Euclidean spaces
  4.1 Mirror maps
  4.2 Mirror descent
  4.3 Standard setups for mirror descent
  4.4 Lazy mirror descent, aka Nesterov's dual averaging
  4.5 Mirror prox
  4.6 The vector field point of view on MD, DA, and MP
5 Beyond the black-box model
  5.1 Sum of a smooth and a simple non-smooth term
  5.2 Smooth saddle-point representation of a non-smooth function
  5.3 Interior point methods
  • On Some Polynomial-Time Algorithms for Solving Linear Programming Problems
Available online at www.pelagiaresearchlibrary.com. Pelagia Research Library, Advances in Applied Science Research, 2012, 3 (5): 3367-3373. ISSN: 0976-8610, CODEN (USA): AASRFC.

On Some Polynomial-time Algorithms for Solving Linear Programming Problems. B. O. Adejo and H. S. Adaji, Department of Mathematical Sciences, Kogi State University, Anyigba.

ABSTRACT
In this article we survey some polynomial-time algorithms for solving linear programming problems, namely: the ellipsoid method, Karmarkar's algorithm and the affine-scaling algorithm. Finally, we consider a test problem which we solved with the methods, where applicable, and draw conclusions from the results so obtained.

INTRODUCTION
An algorithm A is said to be a polynomial-time algorithm for a problem P if the number of steps (i.e., iterations) required to solve P on applying A is bounded by a polynomial function of the dimensions m, n and the input length L of the problem. The ellipsoid method is a specific algorithm developed by the Soviet mathematicians Shor [5], Yudin and Nemirovskii [7]. Khachiyan [4] adapted the ellipsoid method to derive the first polynomial-time algorithm for linear programming. Although the algorithm is theoretically better than the simplex algorithm, which has exponential running time in the worst case, it is very slow in practice and not competitive with the simplex method. Nevertheless, it is a very important theoretical tool for developing polynomial-time algorithms for a large class of convex optimization problems. Narendra K. Karmarkar is an Indian mathematician, renowned for developing Karmarkar's algorithm. He is listed as an ISI highly cited researcher.
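Of the three methods surveyed, the affine-scaling iteration is the simplest to state. The sketch below is my illustration of one common form of the primal affine-scaling step (not code from the article), for an LP in standard equality form min c^T x s.t. Ax = b, x ≥ 0, started from a strictly feasible interior point:

```python
import numpy as np

def affine_scaling(A, b, c, x, gamma=0.9, tol=1e-8, max_iter=200):
    """Primal affine scaling for: min c^T x  s.t.  Ax = b, x >= 0.
    x must be strictly feasible (Ax = b, x > 0)."""
    for _ in range(max_iter):
        D2 = np.diag(x**2)                             # D^2, with D = diag(x)
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)  # dual estimate
        r = c - A.T @ w                                # reduced costs
        dx = -D2 @ r                                   # affine-scaling direction
        if np.linalg.norm(dx) < tol:                   # no descent left: stop
            break
        neg = dx < 0
        if not neg.any():                              # ray of descent
            raise ValueError("problem appears unbounded")
        alpha = gamma * np.min(-x[neg] / dx[neg])      # damped ratio test
        x = x + alpha * dx
    return x

# Toy run: min x1 + 2*x2  s.t.  x1 + x2 + x3 = 4, x >= 0 (optimum 0 at (0,0,4)).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([4.0])
c = np.array([1.0, 2.0, 0.0])
print(np.round(affine_scaling(A, b, c, np.array([1.0, 1.0, 2.0])), 4))
```

Each step rescales the problem so the current iterate sits at (1, ..., 1), moves along the projected steepest-descent direction, and damps the step by gamma < 1 to stay strictly inside the positive orthant.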
  • Solving Ellipsoidal Inclusion and Optimal Experimental Design Problems: Theory and Algorithms
SOLVING ELLIPSOIDAL INCLUSION AND OPTIMAL EXPERIMENTAL DESIGN PROBLEMS: THEORY AND ALGORITHMS. A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, by Selin Damla Ahipaşaoğlu, August 2009. © 2009 Selin Damla Ahipaşaoğlu. ALL RIGHTS RESERVED.

This thesis is concerned with the development and analysis of Frank-Wolfe type algorithms for two problems, namely the ellipsoidal inclusion problem of optimization and the optimal experimental design problem of statistics. These two problems are closely related to each other and can be solved simultaneously, as discussed in Chapter 1 of this thesis. Chapter 1 introduces the problems in parametric forms. The weak and strong duality relations between them are established and the optimality criteria are derived. Based on this discussion, we define ε-primal feasible and ε-approximate optimal solutions: these solutions do not necessarily satisfy the optimality criteria, but the violation is controlled by the error parameter ε and can be made arbitrarily small. Chapter 2 deals with the most well-known special case of the optimal experimental design problems: the D-optimal design problem and its dual, the Minimum-Volume Enclosing Ellipsoid (MVEE) problem. Chapter 3 focuses on another special case, the A-optimal design problem. In Chapter 4 we focus on a generalization of the optimal experimental design problem in which a subset, but not all, of the parameters is being estimated. We focus on the following two problems: the Dk-optimal design problem and the Ak-optimal design problem, generalizations of the D-optimal and the A-optimal design problems, respectively.
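The Frank-Wolfe connection can be made concrete with Khachiyan's classical first-order algorithm for the D-optimal design / MVEE pair, which is a Frank-Wolfe method with an exact line search on the dual. The sketch below is my illustration of that textbook scheme, not code from the dissertation:

```python
import numpy as np

def mvee_weights(X, eps=1e-3, max_iter=1000):
    """Khachiyan-type first-order method for the D-optimal design /
    minimum-volume enclosing ellipsoid problem.
    X: d x n array of n points; returns design weights u (summing to 1)."""
    d, n = X.shape
    Q = np.vstack([X, np.ones(n)])        # lift points to dimension d+1
    u = np.full(n, 1.0 / n)               # uniform initial design
    for _ in range(max_iter):
        M = Q @ np.diag(u) @ Q.T          # (d+1) x (d+1) design matrix
        kappa = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(M), Q)
        j = np.argmax(kappa)              # most "violating" point
        if kappa[j] <= (1 + eps) * (d + 1):
            break                         # eps-approximate optimality reached
        beta = (kappa[j] - d - 1) / ((d + 1) * (kappa[j] - 1))
        u = (1 - beta) * u                # Frank-Wolfe / multiplicative step
        u[j] += beta
    return u

# Example: design weights for 6 random points in the plane.
rng = np.random.default_rng(0)
print(np.round(mvee_weights(rng.normal(size=(2, 6))), 3))
```

Points receiving positive weight are the support points of the D-optimal design, equivalently the contact points of the minimum-volume enclosing ellipsoid.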
  • An Improved Ellipsoid Method for Solving Convex Differentiable Optimization Problems
An Improved Ellipsoid Method for Solving Convex Differentiable Optimization Problems. Amir Beck* and Shoham Sabach†. October 14, 2012.

Abstract. We consider the problem of solving convex differentiable problems with simple constraints on which the orthogonal projection operator is easy to compute. We devise an improved ellipsoid method that relies on improved deep cuts exploiting the differentiability of the objective function as well as the ability to compute an orthogonal projection onto the feasible set. This new version of the ellipsoid method does not require the two different oracles (separation and subgradient) that the "standard" ellipsoid method does. The linear rate of convergence of the sequence of objective function values is proven, and several numerical results illustrate the potential advantage of this approach over the classical ellipsoid method.

* Department of Industrial Engineering and Management, Technion - Israel Institute of Technology, Haifa 32000, Israel. E-mail: [email protected].
† Department of Mathematics, Technion - Israel Institute of Technology, Haifa 32000, Israel. E-mail: [email protected].

1 Introduction
1.1 Short Review of the Ellipsoid Method

Consider the convex problem

(P): min f(x) s.t. x ∈ X,

where f is a convex function over the closed and convex set X. We assume that the objective function f is subdifferentiable over X and that the problem is solvable, with X* being its optimal set and f* being its optimal value. One way to tackle the general problem (P) is via the celebrated ellipsoid method, which is one of the most fundamental methods in convex optimization. It was first developed by Yudin and Nemirovski (1976) and Shor (1977) for general convex optimization problems, and it came to prominence with the seminal work of Khachiyan [?] showing, using the ellipsoid method, that linear programming can be solved in polynomial time.
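For contrast with the improved variant, the classical central-cut ellipsoid method that the authors build on can be sketched in a few lines. This is my illustration of the textbook update (for unconstrained minimization with a subgradient oracle, n ≥ 2), not the authors' method:

```python
import numpy as np

def ellipsoid_min(f, subgrad, x0, P0, max_iter=500):
    """Classical central-cut ellipsoid method for convex f on R^n, n >= 2.
    The ellipsoid {z : (z - x)^T P^{-1} (z - x) <= 1} must contain a minimizer."""
    n = len(x0)
    x, P = x0.astype(float), P0.astype(float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(max_iter):
        g = subgrad(x)
        gPg = g @ P @ g
        if gPg <= 0:                       # ellipsoid numerically degenerate
            break
        gt = g / np.sqrt(gPg)              # normalized cut direction
        x = x - (P @ gt) / (n + 1)         # shift center into the half-ellipsoid
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(P @ gt, P @ gt))
        fx = f(x)
        if fx < best_f:                    # centers need not improve monotonically
            best_x, best_f = x.copy(), fx
    return best_x, best_f
```

A quick sanity check: with f = lambda x: x @ x, subgrad = lambda x: 2 * x, x0 = np.ones(2) and P0 = 100 * np.eye(2), the iterates home in on the origin.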
  • Final Review Note
MS&E310 Final Review Note. Yinyu Ye, Department of Management Science and Engineering, Stanford University, Stanford, CA 94305, U.S.A. http://www.stanford.edu/~yyye

Convex cones
- A set C is a cone if x ∈ C implies αx ∈ C for all α > 0.
- A convex cone is a cone that is also a convex set.
- The dual cone is C* := {y : y • x ≥ 0 for all x ∈ C}; -C* is also called the polar of C.

Separating hyperplane theorem
The most important theorem about convex sets is the following separating theorem.
Theorem 1 (Separating hyperplane theorem). Let C ⊂ E, where E is either R^n or M^n, be a closed convex set and let y be a point exterior to C. Then there is a vector a ∈ E such that a • y < inf_{x ∈ C} a • x.

Theorem 2 (LP fundamental theorem). Given (LP) and (LD), where A has full row rank m, b ∈ R^m and c ∈ R^n, consider the classical LP

(LP) minimize c^T x subject to Ax = b, x ≥ 0,

with decision vector x ∈ R^n. (i) If there is a feasible solution, there is a basic feasible solution (Carathéodory's theorem); (ii) if there is an optimal solution, there is an optimal basic solution.

Basic solution: select m linearly independent columns of A, denoted by the index set B, and solve A_B x_B = b to determine the m-vector x_B, setting the remaining variables x_N of x (those corresponding to the remaining columns of A) to zero.
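The basic-solution recipe in the last paragraph translates directly into code; a minimal sketch (my illustration, assuming the chosen columns of A are indeed linearly independent):

```python
import numpy as np

def basic_solution(A, b, basis):
    """Given A (m x n), b (m,), and a list `basis` of m column indices
    with A[:, basis] nonsingular, return the basic solution:
    A_B x_B = b and x_N = 0."""
    m, n = A.shape
    x = np.zeros(n)
    x[basis] = np.linalg.solve(A[:, basis], b)  # solve A_B x_B = b
    return x                                    # basic *feasible* iff x >= 0

# Example: columns {0, 1} form a basis.
A = np.array([[1.0, 1.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([4.0, 2.0])
print(basic_solution(A, b, [0, 1]))  # -> [2. 2. 0.]
```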
  • EFFICIENT METHODS IN OPTIMIZATION: Théorie de la programmation non-linéaire (Anatoli Iouditski)
Université Joseph Fourier, Master de Mathématiques Appliquées, 2ème année.

EFFICIENT METHODS IN OPTIMIZATION (Théorie de la programmation non-linéaire). Anatoli Iouditski. Lecture Notes.

Optimization problems arise naturally in many application fields. Whatever people do, at some point they get a craving for organizing things in the best possible way. This intention, converted into mathematical form, turns out to be an optimization problem of a certain type (think of, say, the Optimal Diet Problem). Unfortunately, the next step, which consists of finding a solution to the mathematical model, is less trivial. At first glance everything looks very simple: many commercial optimization packages are readily available, and any user can get a "solution" to his model just by clicking on an icon on the desktop of his PC. The question, however, is how much he can trust it. One of the goals of this course is to show that, despite their attraction, general optimization problems very often defeat the expectations of a naive user. In order to apply these formulations successfully, it is necessary to be aware of some theory, which tells us what we can and what we cannot do with optimization problems. The elements of this theory can be found in each lecture of the course. The course itself is based on the lectures given by Arkadi Nemirovski at Technion in the late 1990s; on the other hand, all the errors and inanities you may find here should be put on the account of the name on the title page. http://www-ljk.imag.fr/membres/Anatoli.Iouditski/cours/optimisation-convexe.htm

Contents: 1 Introduction; 1.1 General formulation of the problem ...
  • Matroid Optimization and Algorithms (Robert E. Bixby and William H. Cunningham)
Matroid Optimization and Algorithms. Robert E. Bixby (Rice University) and William H. Cunningham (Carleton University). June, 1990. TR90-15.

Contents: 1. Introduction; 2. Matroid Optimization; 3. Applications of Matroid Intersection; 4. Submodular Functions and Polymatroids; 5. Submodular Flows and other General Models; 6. Matroid Connectivity Algorithms; 7. Recognition of Representability; 8. Matroid Flows and Linear Programming.

1. INTRODUCTION
This chapter considers matroid theory from a constructive and algorithmic viewpoint. A substantial part of the developments in this direction have been motivated by optimization. Matroid theory has led to a unification of fundamental ideas of combinatorial optimization as well as to the solution of significant open problems in the subject. In addition to its influence on this larger subject, matroid optimization is itself a beautiful part of matroid theory. The most basic optimizational property of matroids is that, for any subset, every maximal independent set contained in it is maximum. Alternatively, a trivial algorithm maximizes any {0,1}-valued weight function over the independent sets. Most of matroid optimization consists of attempts to solve successive generalizations of this problem. In one direction it is generalized to the problem of finding a largest common independent set of two matroids: the matroid intersection problem. This problem includes the matching problem for bipartite graphs and several other combinatorial problems. In Edmonds' solution of it and of the equivalent matroid partition problem, he introduced the notions of good characterization (intimately related to the NP class of problems) and matroid (oracle) algorithm.
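That trivial algorithm extends verbatim from {0,1}-valued weights to arbitrary weights, given the matroid only through an independence oracle. A minimal sketch (my illustration; `is_independent` is a hypothetical user-supplied oracle):

```python
def matroid_greedy(elements, weight, is_independent):
    """Greedy maximization of a weight function over the independent sets
    of a matroid. `is_independent(S)` is an independence oracle on sets of
    elements; correctness of this greedy procedure characterizes matroids."""
    S = set()
    # Consider elements by decreasing weight; keep only positive-weight ones.
    for e in sorted(elements, key=weight, reverse=True):
        if weight(e) <= 0:
            break
        if is_independent(S | {e}):   # add e iff independence is preserved
            S.add(e)
    return S

# Example: uniform matroid of rank 2 on {a, b, c} -> picks the 2 heaviest.
w = {'a': 3, 'b': 5, 'c': 1}
print(matroid_greedy(w, w.get, lambda S: len(S) <= 2))  # {'a', 'b'}
```

For two matroids the same oracle viewpoint leads to Edmonds' matroid intersection algorithm, which is the first of the generalizations discussed above.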
  • The Interior-Point Revolution in Optimization: History, Recent Developments, and Lasting Consequences
BULLETIN (New Series) OF THE AMERICAN MATHEMATICAL SOCIETY, Volume 42, Number 1, Pages 39-56. S 0273-0979(04)01040-7. Article electronically published on September 21, 2004.

THE INTERIOR-POINT REVOLUTION IN OPTIMIZATION: HISTORY, RECENT DEVELOPMENTS, AND LASTING CONSEQUENCES. MARGARET H. WRIGHT.

Abstract. Interior methods are a pervasive feature of the optimization landscape today, but it was not always so. Although interior-point techniques, primarily in the form of barrier methods, were widely used during the 1960s for problems with nonlinear constraints, their use for the fundamental problem of linear programming was unthinkable because of the total dominance of the simplex method. During the 1970s, barrier methods were superseded, nearly to the point of oblivion, by newly emerging and seemingly more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost universally regarded as a closed chapter in the history of optimization. This picture changed dramatically in 1984, when Narendra Karmarkar announced a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have continued to transform both the theory and practice of constrained optimization. We present a condensed, unavoidably incomplete look at classical material and recent research about interior methods.

1. Overview
REVOLUTION: (i) a sudden, radical, or complete
  • Search Strategies for Global Optimization
Search Strategies for Global Optimization. Megan Hazen. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Washington, 2008. Program Authorized to Offer Degree: Electrical Engineering.

University of Washington Graduate School. This is to certify that I have examined this copy of a doctoral dissertation by Megan Hazen and have found that it is complete and satisfactory in all respects, and that any and all revisions required by the final examining committee have been made. Chair of the Supervisory Committee: Maya Gupta. Reading Committee: Howard Chizeck, Maya Gupta, Zelda Zabinsky.

In presenting this dissertation in partial fulfillment of the requirements for the doctoral degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of this dissertation is allowable only for scholarly purposes, consistent with "fair use" as prescribed in the U.S. Copyright Law. Requests for copying or reproduction of this dissertation may be referred to ProQuest Information and Learning, 300 North Zeeb Road, Ann Arbor, MI 48106-1346, 1-800-521-0600, to whom the author has granted "the right to reproduce and sell (a) copies of the manuscript in microform and/or (b) printed copies of the manuscript made from microform."

University of Washington Abstract. Search Strategies for Global Optimization. Megan Hazen. Chair of the Supervisory Committee: Assistant Professor Maya Gupta, Department of Electrical Engineering. Global optimizers are powerful tools for searching complicated function spaces such as those found in modern high-fidelity engineering models.