G. Gutin, Computational Optimisation, Royal Holloway 2008
Computational Optimisation

Gregory Gutin

April 4, 2008

Contents

1 Introduction to Computational Optimisation
   1.1 Introduction
   1.2 Algorithm efficiency and problem complexity
   1.3 Optimality and practicality

2 Introduction to Linear Programming
   2.1 Linear programming (LP) model
   2.2 Formulating problems as LP problems
   2.3 Graphical solution of LP problems
   2.4 Questions

3 Simplex Method
   3.1 Standard Form
   3.2 Solutions of Linear Systems
   3.3 The Simplex Method
   3.4 Questions

4 Linear Programming Approaches
   4.1 Artificial variables and Big-M Method
   4.2 Two-Phase Method
   4.3 Shadow prices
   4.4 Dual problem
   4.5 Decomposition of LP problems
   4.6 LP software
   4.7 Questions

5 Integer Programming Modeling
   5.1 Integer Programming vs. Linear Programming
   5.2 IP problems
      5.2.1 Travelling salesman problem
      5.2.2 Knapsack Problem
      5.2.3 Bin packing problem
      5.2.4 Set partitioning/covering/packing problems
      5.2.5 Assignment problem and generalized assignment problem
   5.3 Questions

6 Branch-and-Bound Algorithm
   6.1 A Simple Example for Integer and Mixed Programming
   6.2 Knapsack example
   6.3 Branching strategies
   6.4 MAX-SAT Example
   6.5 Questions

7 Construction Heuristics and Local Search
   7.1 Combinatorial optimisation problems
   7.2 Greedy-type algorithms
   7.3 Special algorithms for the ATSP
   7.4 Improvement local search
   7.5 Questions

8 Computational Analysis of Heuristics
   8.1 Experiments with ATSP heuristics
   8.2 Test beds
   8.3 Comparison of TSP heuristics

9 Theoretical Analysis of Heuristics
   9.1 Property of 2-Opt optimal tours
   9.2 Approximation Analysis
      9.2.1 Travelling Salesman Problem
      9.2.2 Knapsack Problem
      9.2.3 Bin Packing Problem
      9.2.4 Online Problems and Algorithms
   9.3 Domination Analysis
   9.4 Questions

10 Advanced Local Search and Meta-heuristics
   10.1 Advanced Local Search Techniques
   10.2 Meta-heuristics
      10.2.1 Simulated Annealing
      10.2.2 Genetic Algorithms
      10.2.3 Tabu Search

Abstract

These notes accompany the final year course CS3490: Computational Optimisation. We will study basic results, approaches and techniques of such important areas as linear and integer programming, and combinatorial optimisation. Many applications are overviewed.

This document is © Gregory Gutin, 2005. Permission is given to freely distribute this document electronically and on paper. You may not change this document or incorporate parts of it in other documents: it must be distributed intact.
Please send errata to the authors at the address on the title page or electronically to [email protected].

Chapter 1

Introduction to Computational Optimisation

1.1 Introduction

Computational Optimisation (CS3490) will have 3 lectures a week. The aim is to introduce classical and modern methods and approaches in computational optimisation, and to overview applications and the software packages available. The course covers both classical and very recent developments in the area. The main topics of the course are: linear and integer programming, construction heuristics and local search, polynomial time solvable problems, computational and theoretical analysis of heuristics, and meta-heuristics.

Most of the theory will be taught through examples, with theoretical results formulated but not proved. Only a few results will be proved. There will be a final exam (100% of the mark). Any material taught in the lectures may appear in the exam paper. A basic knowledge of graphs and matrices is assumed.

These notes contain areas of blank space in various places. Their purpose is to leave room for examples given in the lectures.

Unfortunately, it is impossible to recommend only one or two books covering the whole course. Several books and articles will be used in a supporting role to these notes, including the following:

• J. Bang-Jensen and G. Gutin, Digraphs: Theory, Algorithms and Applications, Springer, 2000
• J.A. Bondy and U.S.R. Murty, Graph Theory with Applications, North Holland, 1976
• M.W. Carter and C.C. Price, Operations Research: A Practical Introduction, CRC, 2001
• F. Glover and M. Laguna, Tabu Search, Kluwer, 1997
• G. Gutin and A. Punnen (eds.), Traveling Salesman Problem and its Variations, Kluwer, 2002
• G. Gutin and A. Yeo, Anti-matroids, Operations Research Letters 30 (2002) 97–99
• J. Hromkovič, Algorithmics for Hard Problems, Springer, 2001
• Z. Michalewicz and D.B. Fogel, How to Solve It: Modern Heuristics, Springer, 2000
• W.L. Winston, Operations Research, 3rd edition, Duxbury Press, 1994

Inevitably there are misprints in the notes. Please let me know if you have spotted one.

1.2 Algorithm efficiency and problem complexity

We start from two particular optimisation problems.

Assignment Problem (AP). We have n persons p1, ..., pn and n jobs w1, ..., wn, and the cost cij of performing job i by person j. We wish to find an assignment of the persons to the jobs (one person per job) such that the total cost of performing the jobs is minimum. The costs are normally given by a matrix C = [cij].

Example (an instance):

        [ 1  2  3  2 ]
    C = [ 2  5  0  3 ]
        [ 2  1  6  7 ]
        [ 3  4  2  1 ]

Travelling Salesman Problem (TSP). There are n cities t1, ..., tn. Given distances dij from any city ti to any other city tj, we wish to find a tour of shortest total distance that starts at city t1, visits all cities (in some order) and returns to t1.

Example (an instance):

        [ 0  2  3  2 ]
    C = [ 2  0  1  3 ]
        [ 2  1  0  7 ]
        [ 3  4  2  0 ]

The parameter n for both AP and TSP is the size of the problem. Algorithms for most optimisation problems are non-trivial and cannot be performed by hand even for relatively small problem sizes. Hence computers have to be used.
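To make the two problems concrete, here is a minimal brute-force sketch in Python (not part of the original notes) that enumerates every assignment and every tour for the 4 × 4 example instances above. The matrices are copied from the examples; the function and variable names are illustrative.

```python
# Brute-force enumeration for the two example instances above.
# Illustrative sketch only; not an algorithm from the notes.
from itertools import permutations

# AP instance: AP_COSTS[i][j] is the cost c_ij of performing job i by person j.
AP_COSTS = [
    [1, 2, 3, 2],
    [2, 5, 0, 3],
    [2, 1, 6, 7],
    [3, 4, 2, 1],
]

# TSP instance: TSP_DISTS[i][j] is the distance d_ij from city t_i to city t_j
# (cities are numbered from 0 here for convenience).
TSP_DISTS = [
    [0, 2, 3, 2],
    [2, 0, 1, 3],
    [2, 1, 0, 7],
    [3, 4, 2, 0],
]

def solve_ap(costs):
    """Return (minimum total cost, assignment), where assignment[i] is the
    person assigned to job i."""
    n = len(costs)
    return min(
        (sum(costs[job][person] for job, person in enumerate(perm)), perm)
        for perm in permutations(range(n))
    )

def solve_tsp(dists):
    """Return (minimum tour length, tour) over all tours that start and end
    at city 0 and visit every other city exactly once."""
    n = len(dists)
    best = None
    for rest in permutations(range(1, n)):
        tour = (0,) + rest + (0,)
        length = sum(dists[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best[0]:
            best = (length, tour)
    return best

print("AP optimum: ", solve_ap(AP_COSTS))    # total cost 3
print("TSP optimum:", solve_tsp(TSP_DISTS))  # tour length 7
```

Both functions examine all n! candidate solutions, so they are usable only for very small n; how quickly such enumeration becomes hopeless is exactly the issue discussed next.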
One possibility to predict that a certain computer code C to solve a certain problem P can handle all instances of P of size, say, n = 100 within a CPU hour is to carry out computational experiments on various instances of P of size 100. However, even if we have carried out computational experiments with 2000 instances of P and C solved each of them within a CPU hour, it does not mean that C will spend less than one hour on the 2001st instance.

To predict the running time of algorithms and of the corresponding computer codes, researchers and practitioners compute the number of elementary operations required by a given algorithm to solve any instance of a certain problem (depending on the instance size). Elementary operations are arithmetic operations, logic operations, shifts, etc.

Examples:

In most cases, we are interested in knowing how the number of performed operations depends on n asymptotically. For example, there is an algorithm A_AP for the AP that requires at most O(n^3) operations. This means that the number of operations is at most cn^3, where c is a constant not depending on n. For the TSP there is an algorithm A_TSP that requires at most O(2^n) operations. To sort n different integers whose values are between 1 and n there is an algorithm A_BS (basket sort) that requires at most O(n) operations.

We may draw some conclusions about the three algorithms without carrying out any computational experiments. For simplicity, assume that the constant c in each of the algorithms equals 10 and every operation takes 10^-6 sec to perform. Then for n = 20, A_BS will take at most 2 × 10^-5 sec, A_AP 0.08 sec and A_TSP 10 sec. For n = 40, A_BS will take at most 4 × 10^-5 sec, A_AP 0.64 sec and A_TSP 127 days. For n = 60, A_BS will take at most 6 × 10^-5 sec, A_AP 2.16 sec and A_TSP 366000 years.

Already this example indicates that while A_BS and A_AP can be used for moderate sizes, A_TSP may quickly become unusable. In fact, this example shows the difference between polynomial time and exponential time algorithms. Clearly, polynomial time algorithms are "good" and exponential time algorithms are "bad". Unfortunately, for many optimisation problems, called NP-hard problems, polynomial time algorithms are unknown. Many thousands of NP-hard problems are polynomially equivalent in the sense that if one of them admits a polynomial time algorithm then so does every other one. Many researchers have tried to find polynomial algorithms for various NP-hard problems, but failed. Thus, we believe that NP-hard problems cannot have polynomial time algorithms. Since the TSP is NP-hard, it is very likely that there does not exist a polynomial time algorithm for the TSP.

1.3 Optimality and practicality

Many people with a mathematical education are trained to search for exact solutions to problems. If we are solving a quadratic equation, there is a formula
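Returning to the running-time comparison in Section 1.2: the figures quoted there are easy to reproduce. The short Python sketch below is not part of the original notes; it assumes 10^-6 seconds per elementary operation, counts roughly n operations for A_BS, and counts c·n^3 and c·2^n operations with c = 10 for A_AP and A_TSP, which is what reproduces the numbers in the text.

```python
# Rough reproduction of the running-time estimates in Section 1.2.
# Assumptions (illustrative): each elementary operation takes 1e-6 seconds;
# A_BS performs about n operations, A_AP about 10 * n**3 and A_TSP about
# 10 * 2**n, matching the figures quoted in the notes.

SECONDS_PER_OP = 1e-6

def pretty(seconds):
    """Render a duration in seconds as a human-readable string."""
    if seconds < 60:
        return f"{seconds:.3g} sec"
    if seconds < 86400:
        return f"{seconds / 3600:.3g} hours"
    if seconds < 365 * 86400:
        return f"{seconds / 86400:.3g} days"
    return f"{seconds / (365 * 86400):,.0f} years"

operation_counts = {
    "A_BS (basket sort, O(n))": lambda n: n,
    "A_AP (O(n^3))":            lambda n: 10 * n ** 3,
    "A_TSP (O(2^n))":           lambda n: 10 * 2 ** n,
}

for n in (20, 40, 60):
    print(f"n = {n}")
    for name, ops in operation_counts.items():
        print(f"  {name:26} {pretty(ops(n) * SECONDS_PER_OP)}")
```

Even under these optimistic assumptions, the exponential-time estimate explodes long before n reaches sizes of practical interest, while the polynomial-time estimates stay comfortably small.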