Metaheuristics for NP-Hard Combinatorial Optimization

Total Pages: 16

File Type: PDF, Size: 1020 KB

METAHEURISTICS FOR NP-HARD COMBINATORIAL OPTIMIZATION PROBLEMS

Dinh Trung Hoang (B.Sc., National University of Vietnam)

A THESIS SUBMITTED FOR THE DEGREE OF DOCTOR OF PHILOSOPHY
DEPARTMENT OF ELECTRICAL AND COMPUTER ENGINEERING
NATIONAL UNIVERSITY OF SINGAPORE
2008

ACKNOWLEDGEMENTS

I would like to extend my gratitude and deepest appreciation to Dr. A. A. Mamun for his inspiration, excellent guidance, and endless support and encouragement during this work. He has always made himself available for discussion whenever I encountered problems with the project. His erudite knowledge and deep insights have been a constant inspiration and made this research a rewarding experience. I owe an immense debt of gratitude to him for having sparked my curiosity about metaheuristics. Without his kindest help, this thesis and many others would have been impossible.

Thanks also go to the faculty of the Electrical & Computer Engineering Department at the National University of Singapore for their constant encouragement and valuable advice.

Acknowledgement is extended to the National University of Singapore for awarding me the research scholarship and providing the research facilities and a challenging environment during my studies.

I sincerely acknowledge all the help from the members of the Mechatronics & Automation Lab, National University of Singapore; in particular, my friends Dr. Tang K.Z., Mr. Trung, Mr. Zhu Zhen, Ms. Liu Jin, and Mr. Guan Feng, for their kind assistance and friendship.

Last but not least, I thank my family members for their support, understanding, patience and love during my pursuit of a PhD, especially my pretty and cunning sister Hang and my silly but handsome brother Hieu, for their constant support and for sharing whatever problems or happiness I faced since day one. While struggling to prepare for the oral defence, I received strong support from my beloved Mummy, who came to Singapore twice just to help me with every single thing and attended my oral defence. I deeply appreciate all that she has done for me. I also thank my girlfriend Ngoc Kim for the strong and enduring love she gave me while I was stressed with industrial work at TECH and still trying to finalize the last version of the thesis. This acknowledgement would not be complete if the great sacrifice of my Dad for his children's further study were not recalled. This thesis, thereupon, is dedicated to them for their infinite stability margin.

METAHEURISTICS FOR NP-HARD COMBINATORIAL OPTIMIZATION PROBLEMS

Dinh Trung Hoang
National University of Singapore
2008

Abstract

Combinatorial optimization problems (COPs) are of great theoretical and practical importance. Unfortunately, most interesting COPs have been proved intractable. Approximation approaches to these problems have therefore received much attention since the 1970s. Over the past decades, a new kind of approximation algorithm, nowadays termed metaheuristics, has emerged, providing a framework for solving many COPs by exploring the search space efficiently and exploiting the search history effectively. Among approximation algorithms, metaheuristic algorithms are widely recognized as one of the most practical approaches to combinatorial optimization problems.
Some notable representatives of metaheuristics are Simulated Annealing (SA), Tabu Search (TS), Evolutionary Computation (EC), and Ant Colony Optimization (ACO). For many combinatorial optimization problems, the established metaheuristic algorithms are considered the state-of-the-art methods. In this report, we present two parts of work: one on Ant Colony Optimization, the other on decomposition-based hybrid metaheuristics.

In particular, we propose a model of Ant algorithms that extends the Graph-based Ant System (GBAS) model [106]. GBAS is the first and simplest model used to study theoretical aspects related to the convergence properties of ACO metaheuristics. None of the models proposed to date for studying the convergence properties of ACO has considered a widely used technique for balancing the exploration and exploitation processes in almost all Ant-based algorithms. This technique is well known in the ACO research field and is called the pseudo-random proportional rule, or trade-off technique. To study the effectiveness of this technique in Ant-based algorithms from a convergence perspective, an extended model of GBAS is proposed in one part of this report. Not only does our model retain the convergence properties proved for GBAS, it is also able to elucidate the practical role of this technique in Ant-based algorithms.

Inspired by findings from this extended model, we suggest and experiment with a time-dependent approach. This approach aims at practically improving the performance of Ant-based algorithms through an adaptively adjusting rule for the trade-off technique. To judge the effectiveness of this time-dependent approach, we integrate it into state-of-the-art Ant-based algorithms (Ant Colony System (ACS), Max-Min Ant System, and Best-Worst Ant System) in two different scenarios: i) with local search procedures and ii) without local search procedures in any algorithm. By testing on some medium-scale benchmark instances of the Traveling Salesman Problem, we show experimentally that the performance of the Ant-based algorithms employing the adaptive linear adjusting rule is improved in comparison to that of the original Ant-based algorithms.
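To make the trade-off technique concrete, the following is a minimal Python sketch of the pseudo-random proportional rule as used in ACS-style algorithms, together with a hypothetical linear schedule for the parameter q0. The names (q0, beta) and the schedule are illustrative assumptions, not the thesis's exact formulation.

```python
import random

def choose_next_city(current, unvisited, pheromone, heuristic,
                     q0=0.9, beta=2.0):
    """Pseudo-random proportional rule (ACS-style action choice).

    With probability q0, exploit: pick the candidate maximizing
    tau * eta^beta.  Otherwise, explore: sample proportionally
    to the same weights, as in the original Ant System.
    """
    weights = {j: pheromone[current][j] * heuristic[current][j] ** beta
               for j in unvisited}
    if random.random() < q0:                      # exploitation
        return max(weights, key=weights.get)
    total = sum(weights.values())                 # biased exploration
    r, acc = random.uniform(0, total), 0.0
    for j, w in weights.items():
        acc += w
        if acc >= r:
            return j
    return j  # numerical fallback

def q0_schedule(t, t_max, q0_start=0.5, q0_end=0.95):
    """A hypothetical linearly adjusting rule for q0 over time, in the
    spirit of the time-dependent approach described in the abstract."""
    return q0_start + (q0_end - q0_start) * t / t_max
```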
A field of research on the hybridization of metaheuristics with basic techniques from Artificial Intelligence and/or Operations Research has emerged recently and has rapidly received attention from the metaheuristics community. These hybrid metaheuristics aim at efficiently and effectively tackling large-scale real-world instances of COPs. Some findings in the literature suggest that combining classical artificial intelligence and operations research techniques with metaheuristics can be very beneficial for dealing with large-scale instances of some COPs. In the part of this report on hybrid metaheuristics, we present a runtime analysis of a scheme of hybridization between metaheuristics and clustering (or decomposition) methods. In particular, we prove that a decomposition-based search method, formed by combining a decomposition technique with a problem-solving algorithm, runs faster than methods that do not utilize decomposition techniques. The speedup gained, however, is bounded, and the bounds can be computed in advance. Finding such bounds sheds some light on theoretically elucidating the runtime efficiency of decomposition-based search algorithms over non-decomposition-based ones. This is the first work using a unified but problem- and algorithm-independent framework to evaluate the effectiveness and efficiency of decomposition-based search algorithms in terms of running time, through comparison with the running time of alternative non-decomposition-based search algorithms.

Moreover, in that part of this report, we also address concerns over a disadvantage of decomposition-based methods, which relates to their failure to achieve optimal solutions in some scenarios. Those scenarios depend simultaneously on both the problem-solving methods and the structure of the instances of the optimization problems. Our finding suggests that, given an inexact decomposition-based method for solving an optimization problem, there probably exist some instances of the problem for which the method fails to include any optimal solution in the search space. This means no optimal solution can be found using such a method, no matter how much time any algorithmic instance of the method is given to run.

To illustrate, we propose a simple inexact decomposition-based method to solve the Euclidean Traveling Salesman Problem (ETSP) and derive a sufficient condition on the structure of ETSP instances such that if an instance of ETSP satisfies the condition, all of its optimal solutions are contained in the search space generated by the proposed method; otherwise, no optimal solution appears in that search space. However, the sufficient condition is applicable only for a restricted number of subproblems; thus, to make the condition more robust and applicable to large-scale instances, we extend it with additional assumptions on the structure of those large-scale instances. The experimental results show that the performance of a decomposition-based algorithm using ACS and derived from the sufficient condition is better than that of ACS on the same tests, consisting of large-scale clustered ETSP instances. (A minimal sketch of such a cluster-first decomposition scheme is given after the table of contents.)

TABLE OF CONTENTS

Acknowledgements
Table of Contents
List of Tables
List of Figures
1 Introduction
  1.1 Background and Motivation
    1.1.1 Combinatorial Optimization
      1.1.1.1 The Optimization Problem
      1.1.1.2 Combinatorial Optimization
    1.1.2 On the Computational Complexity of Algorithms and the No Free Lunch Theorem
      1.1.2.1 Computational Complexity of Algorithms and P vs NP
      1.1.2.2 The No Free Lunch Theorem - a priori equivalence of search algorithms
    1.1.3 Exact versus approximate approaches
    1.1.4 Motivation
  1.2 Aims and Scope
2 Literature Review
  2.1 Metaheuristics - concepts, classification and characteristics
    2.1.1 What is a Metaheuristic?
    2.1.2 Classification of Metaheuristics
    2.1.3 Diversification and Intensification in Metaheuristics
  2.2 Some state-of-the-art metaheuristics
    2.2.1 Population-based Approaches
      2.2.1.1 Evolutionary Computation
      2.2.1.2 Scatter Search and Path Relinking
        2.2.1.2.1 Scatter Search
        2.2.1.2.2 Path Relinking
      2.2.1.3 Estimation of Distribution Algorithms - EDAs
      2.2.1.4 Ant Colony Optimization - ACO
    2.2.2 Trajectory Approaches
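As promised above, here is a minimal, hypothetical cluster-first / route-second sketch of a decomposition-based method for Euclidean TSP in Python: cluster the cities, solve each cluster with a nearest-neighbour heuristic, and concatenate the sub-tours. It is not the thesis's actual method (which uses ACS and a derived sufficient condition); it only shows the shape of a decomposition-based search, including why it is inexact.

```python
import numpy as np
from sklearn.cluster import KMeans

def nearest_neighbour_tour(points):
    """Greedy nearest-neighbour tour over a small point set."""
    unvisited = list(range(len(points)))
    tour = [unvisited.pop(0)]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def decompose_and_route(points, k=4, seed=0):
    """Cluster-first, route-second: an inexact decomposition-based method.

    Tours that would cross between clusters more than once are excluded
    from the search space -- the source of the suboptimality discussed
    in the abstract.
    """
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(points)
    tour = []
    for c in range(k):                    # visit clusters in index order
        idx = np.where(labels == c)[0]    # (a real method would order them)
        sub = nearest_neighbour_tour(points[idx])
        tour.extend(idx[i] for i in sub)
    return tour

cities = np.random.default_rng(1).random((200, 2))
print(decompose_and_route(cities)[:10])
```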
Recommended publications
  • 4. Convex Optimization Problems
    Convex Optimization — Boyd & Vandenberghe. 4. Convex optimization problems: optimization problem in standard form; convex optimization problems; quasiconvex optimization; linear optimization; quadratic optimization; geometric programming; generalized inequality constraints; semidefinite programming; vector optimization.

    Optimization problem in standard form:

        minimize    f_0(x)
        subject to  f_i(x) ≤ 0,  i = 1, ..., m
                    h_i(x) = 0,  i = 1, ..., p

    Here x ∈ R^n is the optimization variable; f_0 : R^n → R is the objective or cost function; f_i : R^n → R, i = 1, ..., m, are the inequality constraint functions; and h_i : R^n → R are the equality constraint functions. The optimal value is p* = inf { f_0(x) | f_i(x) ≤ 0, i = 1, ..., m, h_i(x) = 0, i = 1, ..., p }; p* = +∞ if the problem is infeasible (no x satisfies the constraints), and p* = −∞ if the problem is unbounded below.

    Optimal and locally optimal points: x is feasible if x ∈ dom f_0 and it satisfies the constraints. A feasible x is optimal if f_0(x) = p*; X_opt is the set of optimal points. x is locally optimal if there is an R > 0 such that x is optimal for: minimize (over z) f_0(z) subject to f_i(z) ≤ 0, i = 1, ..., m; h_i(z) = 0, i = 1, ..., p; ‖z − x‖_2 ≤ R.

    Examples (with n = 1, m = p = 0):
    • f_0(x) = 1/x, dom f_0 = R_++: p* = 0, no optimal point
    • f_0(x) = −log x, dom f_0 = R_++: p* = −∞
    • f_0(x) = x log x, dom f_0 = R_++: p* = −1/e, x = 1/e is optimal
    • f_0(x) = x^3 − 3x: p* = −∞, local optimum at x = 1

    Implicit constraints: the standard form optimization problem has an implicit
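A standard-form convex problem like the one above can be stated almost verbatim with a modelling tool. Below is a minimal sketch using the cvxpy library; the specific objective and constraints are invented illustrations, not taken from the slides.

```python
import cvxpy as cp
import numpy as np

# minimize f0(x) subject to fi(x) <= 0 and h(x) = 0, in standard form.
# Here f0 is a least-squares objective; the constraints are illustrative.
np.random.seed(0)
A, b = np.random.randn(5, 3), np.random.randn(5)

x = cp.Variable(3)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex f0
constraints = [x >= 0,                               # f_i(x) = -x_i <= 0
               cp.sum(x) == 1]                       # h(x) = 1^T x - 1 = 0
problem = cp.Problem(objective, constraints)
p_star = problem.solve()
print(p_star, x.value)  # optimal value p* and an optimal point
```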
  • Metaheuristics
    METAHEURISTICS. Kenneth Sörensen, University of Antwerp, Belgium; Fred Glover, University of Colorado and OptTek Systems, Inc., USA.

    1 Definition. A metaheuristic is a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms (Sörensen and Glover, to appear). Notable examples of metaheuristics include genetic/evolutionary algorithms, tabu search, simulated annealing, and ant colony optimization, although many more exist. A problem-specific implementation of a heuristic optimization algorithm according to the guidelines expressed in a metaheuristic framework is also referred to as a metaheuristic. The term was coined by Glover (1986) and combines the Greek prefix meta- (metá, beyond, in the sense of high-level) with heuristic (from the Greek heuriskein or euriskein, to search). Metaheuristic algorithms, i.e., optimization methods designed according to the strategies laid out in a metaheuristic framework, are — as the name suggests — always heuristic in nature. This fact distinguishes them from exact methods, which do come with a proof that the optimal solution will be found in a finite (although often prohibitively large) amount of time. Metaheuristics are therefore developed specifically to find a solution that is "good enough" in a computing time that is "small enough". As a result, they are not subject to combinatorial explosion – the phenomenon where the computing time required to find the optimal solution of NP-hard problems increases as an exponential function of the problem size. Metaheuristics have been demonstrated by the scientific community to be a viable, and often superior, alternative to more traditional (exact) methods of mixed-integer optimization such as branch and bound and dynamic programming.
  • Using Ant System Optimization Technique for Approximate Solution to Multi-Objective Programming Problems
    International Journal of Scientific & Engineering Research, Volume 4, Issue 9, September 2013, ISSN 2229-5518. Using Ant System Optimization Technique for Approximate Solution to Multi-Objective Programming Problems. M. El-sayed Wahed and Amal A. Abou-Elkayer, Department of Computer Science, Faculty of Computers and Information, Suez Canal University (Egypt). E-mail: [email protected]

    Abstract: In this paper we use the ant system optimization metaheuristic to find approximate solutions to Multi-objective Linear Programming Problems (MLPP). The advantages and disadvantages of the suggested method are also discussed, focusing on parallel computation and real-time optimization. It is worth mentioning that the suggested method does not require any artificial variables; the slack and surplus variables are enough. A test example is given at the end to show how the method works. Keywords: Multi-objective Linear Programming Problems, Ant System Optimization.

    Introduction: The Multi-objective Linear Programming Problem is an optimization method which can be used to find solutions to problems where the multiple objective functions and constraints are linear functions of the decision variables. The intersection of the constraints results in a polyhedron which represents the region containing all feasible solutions. The constraint equations in the MLPP may be in the form of equalities or inequalities, and the inequalities can be changed to equalities by using slack or surplus variables; the reader is referred to Rao [1] and Philips [2] for a good introduction to the MLPP. LP-type optimization problems were first recognized in the 1930s by economists while developing methods for the optimal allocation of resources; the main progress came from George B.
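The excerpt's remark that inequalities can be converted to equalities with slack or surplus variables is mechanical; here is a small illustrative numpy sketch (the matrices below are invented examples):

```python
import numpy as np

def to_equality_form(A_ub, b_ub):
    """Convert A x <= b into [A  I] [x; s] = b with slack s >= 0.

    A surplus variable works the same way for >= rows, using -I instead.
    """
    m, n = A_ub.shape
    A_eq = np.hstack([A_ub, np.eye(m)])   # one slack column per row
    return A_eq, b_ub                     # now solve with x, s >= 0

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])
A_eq, b_eq = to_equality_form(A, b)
print(A_eq)   # [[1. 2. 1. 0.], [3. 1. 0. 1.]]
```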
  • Revised Primal Simplex Method — Katta G. Murty
    6.1 Revised Primal Simplex Method. Katta G. Murty, IOE 510, LP, U. of Michigan, Ann Arbor. First put the LP in standard form. This involves the following steps:
    • If a variable has only a lower bound restriction, or only an upper bound restriction, replace it by the corresponding non-negative slack variable.
    • If a variable has both a lower bound and an upper bound restriction, transform the lower bound to zero, and list the upper bound restriction as a constraint (for this version of the algorithm only; in the bounded-variable simplex method both lower and upper bound restrictions are treated as restrictions, not as constraints).
    • Convert all inequality constraints to equations by introducing an appropriate nonnegative slack for each.
    • If there are any unrestricted variables, eliminate each of them one by one by performing a pivot step. Each of these reduces the number of variables by one and the number of constraints by one; this is equivalent to keeping them as permanent basic variables in the tableau.
    • Write the objective in min form, and introduce it as the bottom row of the original tableau.
    • Make all RHS constants in the remaining constraints nonnegative.

    Example: Max z = x1 − x2 + x3 + x5 subject to
        x1 − x2 − x4 − x5 ≥ 2
        x2 − x3 + x5 + x6 ≤ 11
        x1 + x2 + x3 − x5 = 14
        −x1 + x4 = 6
        x1 ≥ 1; x2 ≤ 1; x3, x4 ≥ 0; x5, x6 unrestricted.

    Revised Primal Simplex Algorithm with Explicit Basis Inverse. Input needed: problem in standard form, original tableau, and a primal feasible basic vector.

    Original tableau:

        x1  ...  xj  ...  xn   −z | RHS
        a11 ...  a1j ...  a1n   0 | b1
        ...
        am1 ...  amj ...  amn   0 | bm
        c1  ...  cj  ...  cn    1 | α

    Initial setup: Let xB be a primal feasible basic vector and B (m×m) the associated basis.
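As a quick check on the toy example above, here is a hedged sketch using scipy's linprog, which performs the standard-form conversions internally; the free variables x5, x6 are handled with unbounded bounds. Note the status check: as transcribed, the toy example may well be unbounded (x2 has no lower bound), which the solver will report rather than returning an optimum.

```python
from scipy.optimize import linprog

# Max z = x1 - x2 + x3 + x5  ==  Min -x1 + x2 - x3 - x5
c = [-1, 1, -1, 0, -1, 0]

A_ub = [[-1, 1, 0, 1, 1, 0],        # x1 - x2 - x4 - x5 >= 2, negated
        [0, 1, -1, 0, 1, 1]]        # x2 - x3 + x5 + x6 <= 11
b_ub = [-2, 11]

A_eq = [[1, 1, 1, 0, -1, 0],        # x1 + x2 + x3 - x5 = 14
        [-1, 0, 0, 1, 0, 0]]        # -x1 + x4 = 6
b_eq = [14, 6]

bounds = [(1, None), (None, 1), (0, None), (0, None),
          (None, None), (None, None)]  # x5, x6 unrestricted

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
print(res.status, res.message)  # 0 = optimal; 3 = unbounded; etc.
```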
  • Chapter 6: Gradient-Free Optimization
    AA222: MDO, Monday 30th April, 2012. Chapter 6: Gradient-Free Optimization. 6.1 Introduction. Using optimization in the solution of practical applications, we often encounter one or more of the following challenges:
    • non-differentiable functions and/or constraints
    • disconnected and/or non-convex feasible space
    • discrete feasible space
    • mixed variables (discrete, continuous, permutation)
    • large dimensionality
    • multiple local minima (multi-modal)
    • multiple objectives

    Gradient-based optimizers are efficient at finding local minima for high-dimensional, nonlinearly-constrained, convex problems; however, most gradient-based optimizers have problems dealing with noisy and discontinuous functions, and they are not designed to handle multi-modal problems or discrete and mixed discrete-continuous design variables. Consider, for example, the Griewank function:

        f(x) = Σ_{i=1}^{n} x_i^2 / 4000 − Π_{i=1}^{n} cos(x_i / √i) + 1,    −600 ≤ x_i ≤ 600    (6.1)

    [Figure 6.1: Graphs illustrating the various types of functions that are problematic for gradient-based optimization algorithms.]
    [Figure 6.2: The Griewank function looks deceptively smooth when plotted in a large domain (left), but when you zoom in, you can see that the design space has multiple local minima (center), although the function is still smooth (right).]

    How could we find the best solution for this example?
    • Multiple restarts of a gradient-based (local) optimizer
    • Systematically search the design space
    • Use gradient-free optimizers

    Many gradient-free methods mimic mechanisms observed in nature or use heuristics. Unlike gradient-based methods in a convex search space, gradient-free methods are not necessarily guaranteed to find the true global optimal solutions, but they are able to find many good solutions (the mathematician's answer vs.
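A minimal sketch of attacking the Griewank function of Eq. (6.1) with a gradient-free method, here scipy's differential_evolution; the dimension n = 5 is an arbitrary choice to keep the run fast.

```python
import numpy as np
from scipy.optimize import differential_evolution

def griewank(x):
    """Griewank function, Eq. (6.1): global minimum f = 0 at x = 0."""
    i = np.arange(1, len(x) + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1

bounds = [(-600, 600)] * 5           # n = 5 for a quick demonstration
result = differential_evolution(griewank, bounds, seed=0, tol=1e-8)
print(result.x, result.fun)          # should land near the origin, f ~ 0
```

A gradient-based method started at a random point would typically stall in one of the many local minima visible in Figure 6.2; the population-based search is what lets it escape them.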
  • An Annotated Bibliography of Network Interior Point Methods
    AN ANNOTATED BIBLIOGRAPHY OF NETWORK INTERIOR POINT METHODS. Mauricio G.C. Resende and Geraldo Veiga. Abstract: This paper presents an annotated bibliography on interior point methods for solving network flow problems. We consider single and multicommodity network flow problems, as well as preconditioners used in implementations of conjugate gradient methods for solving the normal systems of equations that arise in interior network flow algorithms. Applications in electrical engineering and miscellaneous papers complete the bibliography. The collection includes papers published in journals, books, Ph.D. dissertations, and unpublished technical reports.

    1. Introduction. Many problems arising in transportation, communications, and manufacturing can be modeled as network flow problems (Ahuja et al. (1993)). In these problems, one seeks an optimal way to move flow, such as overnight mail, information, buses, or electrical currents, on a network, such as a postal network, a computer network, a transportation grid, or a power grid. Among these optimization problems, many are special classes of linear programming problems, with combinatorial properties that enable the development of efficient solution techniques. The focus of this annotated bibliography is on recent computational approaches, based on interior point methods, for solving large scale network flow problems. In the last two decades, many other approaches have been developed. A history of computational approaches up to 1977 is summarized in Bradley et al. (1977). Several computational studies established the fact that specialized network simplex algorithms are orders of magnitude faster than the best linear programming codes of that time. (See, e.g., the studies in Glover and Klingman (1981); Grigoriadis (1986); Kennington and Helgason (1980) and Mulvey (1978).)
  • The Simplex Algorithm in Dimension Three
    The Simplex Algorithm in Dimension Three. Volker Kaibel, Rafael Mechtel, Micha Sharir, Günter M. Ziegler. Abstract: We investigate the worst-case behavior of the simplex algorithm on linear programs with 3 variables, that is, on 3-dimensional simple polytopes. Among the pivot rules that we consider, the "random edge" rule yields the best asymptotic behavior as well as the most complicated analysis. All other rules turn out to be much easier to study, but also produce worse results: most of them show essentially worst-possible behavior; this includes both Kalai's "random-facet" rule, which is known to be subexponential without dimension restriction, as well as Zadeh's deterministic history-dependent rule, for which no non-polynomial instances in general dimensions have been found so far.

    1 Introduction. The simplex algorithm is a fascinating method for at least three reasons: for computational purposes it is still the most efficient general tool for solving linear programs; from a complexity point of view it is the most promising candidate for a strongly polynomial time linear programming algorithm; and, last but not least, geometers are pleased by its inherent use of the structure of convex polytopes. The essence of the method can be described geometrically: given a convex polytope P by means of inequalities, a linear functional ϕ "in general position," and some vertex v_start,

    (Footnote: Work on this paper by Micha Sharir was supported by NSF Grants CCR-97-32101 and CCR-00-98246, by a grant from the U.S.-Israeli Binational Science Foundation, by a grant from the Israel Science Fund (for a Center of Excellence in Geometric Computing), and by the Hermann Minkowski-MINERVA Center for Geometry at Tel Aviv University.)
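To make the "random edge" pivot rule concrete, here is a tiny hypothetical sketch: the polytope is abstracted as a vertex-adjacency graph with objective values, and the rule repeatedly moves to a uniformly random improving neighbour. The 3-cube example is invented for illustration, not taken from the paper.

```python
import random

def random_edge_simplex(adjacency, objective, v_start, seed=0):
    """Walk the polytope graph, taking a random improving edge each step."""
    rng = random.Random(seed)
    v = v_start
    while True:
        better = [u for u in adjacency[v] if objective[u] < objective[v]]
        if not better:            # no improving edge: v is optimal
            return v
        v = rng.choice(better)    # the "random edge" pivot rule

# Vertices of the 3-cube as bit-tuples; neighbours differ in one coordinate.
verts = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
adj = {v: [u for u in verts if sum(x != y for x, y in zip(u, v)) == 1]
       for v in verts}
obj = {v: v[0] + 2 * v[1] + 4 * v[2] for v in verts}  # linear functional
print(random_edge_simplex(adj, obj, v_start=(1, 1, 1)))  # -> (0, 0, 0)
```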
  • A Public Transport Bus Assignment Problem: Parallel Metaheuristics Assessment
    David Fernandes Semedo, Licentiate in Informatics Engineering. A Public Transport Bus Assignment Problem: Parallel Metaheuristics Assessment. Dissertation submitted for the degree of Master in Informatics Engineering. Advisor: Prof. Pedro Barahona, Full Professor, Universidade Nova de Lisboa. Co-advisor: Prof. Pedro Medeiros, Associate Professor, Universidade Nova de Lisboa. Jury: President: Prof. José Augusto Legatheaux Martins; Examiner: Prof. Salvador Luís Bettencourt Pinto de Abreu; Member: Prof. Pedro Manuel Corrêa Calvente de Barahona. September 2015.

    Copyright © David Fernandes Semedo, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa. The Faculdade de Ciências e Tecnologia and the Universidade Nova de Lisboa have the perpetual right, without geographical limits, to archive and publish this dissertation through printed copies reproduced on paper or in digital form, or by any other known means or any that may yet be invented, and to disseminate it through scientific repositories and to permit its copying and distribution for educational or research purposes, non-commercial, provided that credit is given to the author and editor. To my parents.

    ACKNOWLEDGEMENTS. This research was partly supported by project "RtP - Restrict to Plan", funded by FEDER (Fundo Europeu de Desenvolvimento Regional), through programme COMPETE - POFC (Operacional Factores de Competitividade) with reference 34091. First and foremost, I would like to express my genuine gratitude to my advisor, professor Pedro Barahona, for the continuous support of my thesis, and for his guidance, patience and expertise. His general optimism, enthusiasm and availability to discuss and support new ideas helped and encouraged me to push my work always a little further.
  • 11.1 Algorithms for Linear Programming
    6.854 Advanced Algorithms, Lecture 11 (10/18/2004). Lecturer: David Karger. Scribes: David Schultz.

    11.1 Algorithms for Linear Programming

    11.1.1 Last Time: The Ellipsoid Algorithm. Last time, we touched on the ellipsoid method, which was the first polynomial-time algorithm for linear programming. A neat property of the ellipsoid method is that you don't have to be able to write down all of the constraints in a polynomial amount of space in order to use it. You only need one violated constraint in every iteration, so the algorithm still works if you only have a separation oracle that gives you a separating hyperplane in a polynomial amount of time. This makes the ellipsoid algorithm powerful for theoretical purposes, but it isn't so great in practice; when you work out the details of the implementation, the running time winds up being O(n^6 log nU).

    11.1.2 Interior Point Algorithms. We will finish our discussion of algorithms for linear programming with a class of polynomial-time algorithms known as interior point algorithms. These have begun to be used in practice recently; some of the available LP solvers now allow you to use them instead of Simplex. The idea behind interior point algorithms is to avoid turning corners, since this was what led to combinatorial complexity in the Simplex algorithm. They do this by staying in the interior of the polytope, rather than walking along its edges. A given constrained linear optimization problem is transformed into an unconstrained gradient descent problem. To avoid venturing outside of the polytope when descending the gradient, these algorithms use a potential function that is small at the optimal value and huge outside of the feasible region.
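The potential-function idea can be sketched in a few lines: a crude log-barrier gradient descent for min c·x subject to Ax ≤ b, where the barrier −Σ log(b − Ax) blows up at the boundary and keeps iterates interior. The step sizes, tolerances, and toy problem below are arbitrary illustrative choices, not the method an LP solver actually uses.

```python
import numpy as np

def barrier_descent(c, A, b, x0, mu=10.0, steps=200, lr=0.01):
    """Crude interior-point sketch: gradient descent on
    f(x) = mu * c.x - sum(log(b - A x)), staying strictly feasible."""
    x = x0.astype(float)
    for _ in range(steps):
        slack = b - A @ x                  # must stay > 0 (interior)
        grad = mu * c + A.T @ (1.0 / slack)
        step = lr * grad
        while np.any(b - A @ (x - step) <= 0):   # backtrack: stay inside
            step *= 0.5
        x -= step
    return x

# min x1 + x2 on the box 0 <= x <= 1, written as A x <= b.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
x = barrier_descent(np.array([1.0, 1.0]), A, b, x0=np.array([0.5, 0.5]))
print(x)  # approaches the optimal corner (0, 0) as mu grows
```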
  • IEOR 269, Spring 2010 Integer Programming and Combinatorial Optimization
    IEOR 269, Spring 2010: Integer Programming and Combinatorial Optimization. Professor Dorit S. Hochbaum. Contents:
    1 Introduction
    2 Formulation of some ILP
      2.1 0-1 knapsack problem
      2.2 Assignment problem
    3 Non-linear objective functions
      3.1 Production problem with set-up costs
      3.2 Piecewise linear cost function
      3.3 Piecewise linear convex cost function
      3.4 Disjunctive constraints
    4 Some famous combinatorial problems
      4.1 Max clique problem
      4.2 SAT (satisfiability)
      4.3 Vertex cover problem
    5 General optimization
    6 Neighborhood
      6.1 Exact neighborhood
    7 Complexity of algorithms
      7.1 Finding the maximum element
      7.2 0-1 knapsack
      7.3 Linear systems
      7.4 Linear programming
    8 Some interesting IP formulations
      8.1 The fixed cost plant location problem
      8.2 Minimum/maximum spanning tree (MST)
    9 The Minimum Spanning Tree (MST) Problem
    10 General Matching Problem
      10.1 Maximum Matching Problem in Bipartite Graphs
      10.2 Maximum Matching Problem in Non-Bipartite Graphs
      10.3 Constraint Matrix Analysis for Matching Problems
    11 Traveling Salesperson Problem (TSP)
      11.1 IP Formulation for TSP
    12 Discussion of LP-Formulation for MST
    13 Branch-and-Bound
      13.1 The Branch-and-Bound technique
      13.2 Other Branch-and-Bound techniques
    14 Basic graph definitions
    15 Complexity analysis
      15.1 Measuring quality of an algorithm
  • Chapter 1 Introduction
    Chapter 1: Introduction. 1.1 Motivation. The simplex method was invented by George B. Dantzig in 1947 for solving linear programming problems. Although this method is still a viable tool for solving linear programming problems and has undergone substantial revisions and sophistication in implementation, it is still confronted with exponential worst-case computational effort. The question of the existence of a polynomial time algorithm was answered affirmatively by Khachian [16] in 1979. While the theoretical aspects of this procedure were appealing, computational effort illustrated that the new approach was not going to be a competitive alternative to the simplex method. However, when Karmarkar [15] introduced a new low-order polynomial time interior point algorithm for solving linear programs in 1984, and demonstrated this method to be computationally superior to the simplex algorithm for large, sparse problems, an explosion of research effort in this area took place. Today, a variety of highly effective variants of interior point algorithms exist (see Terlaky [32]). On the other hand, while exterior penalty function approaches have been widely used in the context of nonlinear programming, their application to solving linear programming problems has been comparatively less explored. The motivation of this research effort is to study how several variants of exterior penalty function methods, suitably modified and fine-tuned, perform in comparison with each other, and to provide insights into their viability for solving linear programming problems. Many experimental investigations have been conducted to study in detail the simplex method and several interior point algorithms. Whereas these methods generally perform well and are sufficiently adequate in practice, it has been demonstrated that solving linear programming problems using a simplex-based algorithm, or even an interior-point type of procedure, can be inadequately slow in the presence of complicating constraints, dense coefficient matrices, and ill-conditioning.
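A minimal sketch of the exterior penalty idea for an LP, min c·x subject to Ax ≥ b: penalize squared constraint violations with a growing weight rho, so the unconstrained iterates approach feasibility from outside. The problem data and the rho schedule below are invented for illustration and are not one of the thesis's tuned variants.

```python
import numpy as np
from scipy.optimize import minimize

def exterior_penalty_lp(c, A, b, x0, rhos=(1, 10, 100, 1000)):
    """Solve min c.x s.t. A x >= b via a sequence of unconstrained
    problems  min c.x + rho * ||max(0, b - A x)||^2  (exterior penalty)."""
    x = np.asarray(x0, float)
    for rho in rhos:
        f = lambda x: c @ x + rho * np.sum(np.maximum(0.0, b - A @ x) ** 2)
        x = minimize(f, x, method="BFGS").x   # warm-start the next rho at x
    return x

# Invented toy LP: min x1 + 2 x2  s.t.  x1 + x2 >= 1, x1 >= 0, x2 >= 0.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 0.0, 0.0])
print(exterior_penalty_lp(c, A, b, x0=[2.0, 2.0]))  # tends toward (1, 0)
```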
  • Solving the Maximum Weight Independent Set Problem: Application to Indirect Hex-Mesh Generation
    Solving the Maximum Weight Independent Set Problem: Application to Indirect Hex-Mesh Generation. Dissertation presented by Kilian Verhetsel for obtaining the Master's degree in Computer Science and Engineering. Supervisors: Jean-François Remacle, Amaury Johnen. Readers: Jeanne Pellerin, Yves Deville, Julien Hendrickx. Academic year 2016-2017.

    Acknowledgments: I would like to thank my supervisor, Jean-François Remacle, for believing in me; my co-supervisor, Amaury Johnen, for his encouragement during this project; as well as Jeanne Pellerin, for her patience and helpful advice.

    Contents: List of Notations; List of Figures; List of Tables; List of Algorithms.
    1 Introduction
      1.1 Background on Indirect Mesh Generation
      1.2 Outline
    2 State of the Art
      2.1 Exact Resolution
        2.1.1 Approaches Based on Graph Coloring
        2.1.2 Approaches Based on MaxSAT
        2.1.3 Approaches Based on Integer Programming
        2.1.4 Parallel Implementations
      2.2 Heuristic Approach
    3 Hybrid Approach to Combining Tetrahedra into Hexahedra
      3.1 Computation of the Incompatibility Graph
      3.2 Exact Resolution for Small Graphs
        3.2.1 Complete Search
        3.2.2 Branch and Bound
        3.2.3 Upper Bounds Based on Clique Partitions
        3.2.4 Upper Bounds Based on Linear Programming
      3.3 Construction of an Initial Solution
        3.3.1 Local Search Algorithms
        3.3.2 Local Search Formulation of the Maximum Weight Independent Set
        3.3.3 Strategies to Escape Local Optima
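As context for the maximum weight independent set problem in this last entry, here is a hedged sketch of the classic greedy baseline (repeatedly take the vertex maximizing weight/(degree + 1), then delete its closed neighbourhood); the exact and local search methods in the thesis above improve on exactly this kind of starting point. The 5-vertex path graph is an invented example.

```python
def greedy_mwis(weights, adjacency):
    """Greedy baseline for maximum weight independent set:
    repeatedly pick the vertex maximizing w(v) / (deg(v) + 1),
    then remove it and its neighbours from the remaining graph.
    Ties are broken by set iteration order (implementation-defined)."""
    alive = set(weights)
    chosen = []
    while alive:
        v = max(alive, key=lambda u: weights[u] /
                (len(adjacency[u] & alive) + 1))
        chosen.append(v)
        alive -= adjacency[v] | {v}
    return chosen

# Invented example: the path 0-1-2-3-4 with the weights below.
w = {0: 4, 1: 5, 2: 1, 3: 5, 4: 4}
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(sorted(greedy_mwis(w, adj)))
# e.g. [0, 2, 4] (weight 9) -- greedy is not exact: {1, 3} weighs 10
```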