An Effective Deflected Subgradient Optimization Scheme for Implementing Column Generation for Large-Scale Airline Crew Scheduling Problems

Total pages: 16

File type: PDF, size: 1020 KB

An Effective Deflected Subgradient Optimization Scheme for Implementing Column Generation for Large-Scale Airline Crew Scheduling Problems

Shivaram Subramanian, Enterprise Optimization, United Airlines (WHQKB), Elk Grove Village, Illinois 60007, [email protected]
Hanif D. Sherali, Grado Department of Industrial and Systems Engineering, Virginia Polytechnic Institute and State University, Blacksburg, Virginia 24061, [email protected]

To cite this article: Shivaram Subramanian, Hanif D. Sherali (2008). An Effective Deflected Subgradient Optimization Scheme for Implementing Column Generation for Large-Scale Airline Crew Scheduling Problems. INFORMS Journal on Computing 20(4):565-578. http://dx.doi.org/10.1287/ijoc.1080.0267

We present a new deflected subgradient scheme for generating good-quality dual solutions for linear programming (LP) problems and utilize this within the context of large-scale airline crew planning problems that arise in practice. The motivation for the development of this method came from the failure of a black-box-type approach implemented at United Airlines for solving such problems using column generation in concert with a commercial LP solver, where the software was observed to stall while yet remote from optimality. We identify a phenomenon called dual noise to explain this stalling behavior and present an analysis of the desirable properties of dual solutions in this context. The proposed deflected subgradient approach has been embedded within the crew pairing solver at United Airlines and tested using historical data sets.
Our computational experience suggests a strong correlation between the dual noise phenomenon and the quality of the final solution produced, as well as with the accompanying algorithmic performance. Although we observed that our deflected subgradient scheme yielded an average speed-up factor of 10 for the column generation scheme over the commercial solver, the average reduction in the optimality gap over the same number of iterations was better by a factor of 26, along with an average reduction in the dual noise by a factor of 30. The results from the column generation implementation suggest that significant benefits can be obtained by using the deflected subgradient-based scheme instead of a black-box-type or standard solver approach to solve the intermediate linear programs that arise within the column generation scheme.

Key words: airline crew planning; set-partitioning problems; deflected subgradient approach; dual noise; variable target value method
History: Accepted by William Cook, former Area Editor for Design and Analysis of Algorithms; received February 2006; revised April 2007; accepted November 2007. Published online in Articles in Advance May 15, 2008.

1. Introduction

Consider the following discrete optimization problem, which represents a set-partitioning problem with side constraints (SPP-SC) (see Nemhauser and Wolsey 1999):

$$\text{SPP-SC:}\qquad \text{Minimize } c^{T}x \quad \text{subject to } Ax = e,\; Dx \le d,\; x \in B^{n}. \tag{1}$$

Here, the variables $x \in B^{n}$ represent binary decision variables, with an associated cost vector c. The first set of equality constraints in problem SPP-SC represent set-partitioning constraints, with the coefficients $a_{ij}$ of the $m \times n$ matrix A being either zero or one, depending on whether variable $x_j$ participates in constraint i or not, and where e is a vector of ones. The second set of some r inequality restrictions are side constraints that arise out of certain designated goals pertaining to alternative multiobjective requirements, beyond the cost-minimization objective of SPP-SC.

A typical airline planning instance of SPP-SC is the multiobjective airline crew pairing optimization problem, where the columns $A_j$ of A are generated to satisfy delineated crew pairing legality restrictions. More specifically, this problem seeks an optimal assignment of crew pairings to different flight segments, subject to various contractual and FAA-mandated crew scheduling regulations. In practice, airlines are interested in finding a (near-optimal) solution that would satisfy certain utilization, manpower availability, and quality-of-work-life (QWL) requirements such as schedule regularity, in addition to minimizing crew costs that include hotel, per diem, and traditional metrics such as flight-time-credit (FTC), and a variety of other segment and duty-based penalties and incentives. Vance et al. (1997) provide more details on these crew pairing costs, and Klabjan et al. (2001, 2002) consider such crew pairing problems with additional
regularity and time-window restrictions, as well as include some aspects related to the aircraft-routing problem.

We will restrict the scope of our study in this paper to the pure set-partitioning problem SPP, whose structure poses the main challenge in solving large-scale instances to near optimality. The current analysis can be readily extended to deal with the more restricted and realistic model SPP-SC. In practice, to maintain feasibility, we add nonnegative slack variables s, along with a corresponding large positive undercoverage penalty P in the objective function, where P is typically chosen to be considerably larger than the cost of any legal pairing that can be generated for the given problem. This results in the following augmented formulation of SPP:

$$\text{SPP:}\qquad \text{Minimize } c^{T}x + P e^{T}s \quad \text{subject to } Ax + s = e,\; s \ge 0,\; x \in B^{n}. \tag{2}$$

We now present a brief summary of relevant crew pairing concepts used in crew scheduling applications. A crew duty represents an approximate day's worth of work and consists of a legal sequence of flight segments and flight connections. Layovers represent rest periods between successive duty schedules, and these accrue both crew and hotel costs. A crew pairing is a sequence of crew duties, along with any intermediate layovers, which begins and ends at the same …

… group of Boeing, Inc. reports results that demonstrate their considerable success in solving massively sized railroad crew scheduling problems using column generation techniques (Kohl 2003).

Given the nature of the problem and various other business requirements, a sequential planning process has traditionally been followed, starting with a gross-day approximation where every flight is assumed to operate every day, and ending with a final solution that represents crew pairings that cover each of the flight segments by date. However, recent advances in computer hardware and new column generation algorithms and techniques have facilitated obviating traditional sequential and suboptimal crew planning approaches, and directly solving the multiobjective crew scheduling problem in its entirety within a reasonable amount of computational time. Furthermore, this success has enabled the solution of large optimization problems that arise from integrated airline planning models that cross traditional boundaries between different airline business areas.

The problem instances we consider in this paper are from weekly and monthly crew pairing problems that pertain to United Airlines's flight schedules. These are large-scale SPP instances that directly address a typical week's abstracted problem or the scheduling problem for an entire month. The latter choice is made when a large number of dated exceptions are present in the schedule, which usually occurs during holiday seasons or during months that contain several scheduling changes. In fact, the intensely competitive nature of the airline industry has typically resulted in an increase in the frequency of such demand-driven dynamic scheduling changes …
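The preview ends before the paper's algorithmic details. As a rough illustration of the kind of machinery the abstract describes, the following minimal Python sketch applies a deflected subgradient step to the Lagrangian dual of the set-partitioning LP relaxation. The deflection rule, the fixed target, and all parameter values here are illustrative assumptions; they are not the paper's variable-target-value scheme.

```python
import numpy as np

def dual_value_and_subgradient(c, A, pi):
    """Lagrangian dual of the set-partitioning LP relaxation at multipliers pi:
    L(pi) = e.pi + sum_j min(0, c_j - A_j.pi), a lower bound on the LP optimum.
    Also returns a subgradient g = e - A x(pi) of the (concave) dual."""
    rc = c - A.T @ pi                     # reduced costs of the columns
    x = (rc < 0).astype(float)            # inner minimizer over 0 <= x <= 1
    e = np.ones(A.shape[0])
    return e @ pi + rc @ x, e - A @ x

def deflected_subgradient(c, A, target, iters=200, beta=0.7, lam=1.0):
    """Maximize L(pi) with a deflected direction d_k = g_k + beta*d_{k-1}
    and a Polyak-style step toward a fixed target value. Illustrative only:
    the paper's variable-target-value method adapts the target as it runs."""
    pi = np.zeros(A.shape[0])
    d = np.zeros_like(pi)
    best = -np.inf
    for _ in range(iters):
        val, g = dual_value_and_subgradient(c, A, pi)
        best = max(best, val)
        d = g + beta * d                  # deflection damps multiplier zigzagging
        if d @ d < 1e-12:
            break
        pi = pi + lam * max(target - val, 0.0) / (d @ d) * d
    return pi, best
```

In a column generation loop, multipliers such as pi would play the role of the dual prices handed to the pairing generator.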
Recommended publications
  • A Branch-and-Price Approach with MILP Formulation to Modularity Density Maximization on Graphs
A BRANCH-AND-PRICE APPROACH WITH MILP FORMULATION TO MODULARITY DENSITY MAXIMIZATION ON GRAPHS

KEISUKE SATO, Signalling and Transport Information Technology Division, Railway Technical Research Institute, 2-8-38 Hikari-cho, Kokubunji-shi, Tokyo 185-8540, Japan
YOICHI IZUNAGA, Information Systems Research Division, The Institute of Behavioral Sciences, 2-9 Ichigayahonmura-cho, Shinjyuku-ku, Tokyo 162-0845, Japan

Abstract. For clustering of an undirected graph, this paper presents an exact algorithm for the maximization of modularity density, a more complicated criterion that overcomes drawbacks of the well-known modularity. The problem can be interpreted as the set-partitioning problem, which reminds us of its integer linear programming (ILP) formulation. We provide a branch-and-price framework for solving this ILP, or column generation combined with branch-and-bound. Above all, we formulate the column generation subproblem, to be solved repeatedly, as a simpler mixed integer linear programming (MILP) problem. Acceleration techniques called the set-packing relaxation and the multiple-cutting-planes-at-a-time, combined with the MILP formulation, enable us to optimize the modularity density for famous test instances, including ones with over 100 vertices, in around four minutes on a PC. Our solution method is deterministic, and the computation time is not affected by any stochastic behavior. For one of the instances, column generation at the root node of the branch-and-bound tree provides a fractional upper-bound solution, and our algorithm finds an integral optimal solution after branching. (arXiv:1705.02961v3 [cs.SI], 27 Jun 2017)

E-mail addresses: (Keisuke Sato) [email protected], (Yoichi Izunaga) [email protected].
  • A Lagrangian Decomposition Approach Combined with Metaheuristics for the Knapsack Constrained Maximum Spanning Tree Problem
MASTER'S THESIS: A Lagrangian Decomposition Approach Combined with Metaheuristics for the Knapsack Constrained Maximum Spanning Tree Problem. Carried out at the Institute of Computer Graphics and Algorithms, Vienna University of Technology, under the supervision of Univ.-Prof. Dipl.-Ing. Dr.techn. Günther Raidl and Univ.-Ass. Dipl.-Ing. Dr.techn. Jakob Puchinger, by Sandro Pirkwieser, Bakk.techn., matriculation number 0116200, Simmeringer Hauptstraße 50/30, A-1110 Wien.

Abstract. This master's thesis deals with solving the Knapsack Constrained Maximum Spanning Tree (KCMST) problem, a so far less addressed NP-hard combinatorial optimization problem belonging to the area of network design. Sought is a spanning tree whose profit is maximal, while at the same time its total weight must not exceed a specified value. For this purpose a Lagrangian decomposition approach, a special variant of Lagrangian relaxation, is applied to derive upper bounds. In the course of this application the problem is split into two subproblems, which are likewise to be maximized but are easier to solve on their own. The subgradient optimization method as well as the Volume Algorithm are deployed to solve the Lagrangian dual problem. To derive corresponding lower bounds, i.e., feasible solutions, a simple Lagrangian heuristic is applied, which is strengthened by a problem-specific local search. Furthermore, an Evolutionary Algorithm is presented that uses a suitable encoding for the solutions and appropriate operators, where the latter are able to utilize heuristics based on defined edge profits. It is shown that simple edge profits, derived straightforwardly from the data given by an instance, are of no benefit.
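To make the bounding idea concrete, here is a small Python sketch that dualizes only the knapsack constraint of the KCMST problem with a single multiplier and applies a projected subgradient update. The thesis's Lagrangian decomposition additionally splits the problem into spanning-tree and knapsack subproblems, which this simplified sketch omits; all names and parameters are illustrative.

```python
import numpy as np

def max_spanning_tree(n, edges, score):
    """Kruskal's algorithm for a maximum spanning tree.
    edges: list of (u, v) pairs; score[e]: value of edge e."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    tree = []
    for e in sorted(range(len(edges)), key=lambda e: -score[e]):
        ru, rv = find(edges[e][0]), find(edges[e][1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def kcmst_upper_bound(n, edges, profit, weight, W, iters=100):
    """Dualize the knapsack constraint sum_{e in T} w_e <= W with lam >= 0:
    L(lam) = lam*W + max_T sum_{e in T} (p_e - lam*w_e), T a spanning tree.
    Every L(lam) is an upper bound on the KCMST optimum; a subgradient of
    L at lam is W - w(T), so a projected subgradient step tightens the bound."""
    lam, best = 0.0, float("inf")
    for k in range(1, iters + 1):
        score = [profit[e] - lam * weight[e] for e in range(len(edges))]
        T = max_spanning_tree(n, edges, score)
        best = min(best, lam * W + sum(score[e] for e in T))
        g = W - sum(weight[e] for e in T)      # subgradient of L at lam
        lam = max(0.0, lam - (1.0 / k) * g)    # diminishing step, projected
    return best
```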
  • Chapter 8 Stochastic Gradient / Subgradient Methods
Chapter 8: Stochastic gradient / subgradient methods

Contents (class version)
8.0 Introduction
8.1 The subgradient method
    Subgradients and subdifferentials
    Properties of subdifferentials
    Convergence of the subgradient method
8.2 Example: Hinge loss with 1-norm regularizer for binary classifier design
8.3 Incremental (sub)gradient method
    Incremental (sub)gradient method
8.4 Stochastic gradient (SG) method
    SG update
    Stochastic gradient algorithm: convergence analysis
    Variance reduction: overview
    Momentum
    Adaptive step-sizes
8.5 Example: X-ray CT reconstruction
8.6 Summary

8.0 Introduction

This chapter describes two families of algorithms:
• subgradient methods
• stochastic gradient methods, aka stochastic gradient descent methods

Often we turn to these methods as a "last resort," for applications where none of the methods discussed previously are suitable. Many machine learning applications, …

(© J. Fessler, April 12, 2020, class version)
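As a concrete companion to the chapter outline, the following short Python sketch combines the SG update of Section 8.4 with the hinge-loss/1-norm example of Section 8.2. It is an illustrative reconstruction, not code from the notes; the step-size rule t0/sqrt(k) is an assumption.

```python
import numpy as np

def sg_hinge_l1(X, y, lam=0.01, iters=1000, t0=1.0):
    """Stochastic subgradient method for
    f(w) = (1/n) sum_i max(0, 1 - y_i x_i.w) + lam * ||w||_1,
    with labels y_i in {-1, +1}. One randomly sampled term per iteration."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for k in range(1, iters + 1):
        i = rng.integers(n)                      # sample one data term
        margin = y[i] * (X[i] @ w)
        g_loss = -y[i] * X[i] if margin < 1 else np.zeros(d)
        g = g_loss + lam * np.sign(w)            # subgradient of sampled term + L1
        w -= (t0 / np.sqrt(k)) * g               # diminishing step size
    return w
```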
  • Subgradient Method
Subgradient Method. Ryan Tibshirani, Convex Optimization 10-725/36-725.

Last time: gradient descent. Consider the problem $\min_x f(x)$ for f convex and differentiable, $\mathrm{dom}(f) = \mathbb{R}^n$. Gradient descent: choose initial $x^{(0)} \in \mathbb{R}^n$, repeat
$$x^{(k)} = x^{(k-1)} - t_k \,\nabla f(x^{(k-1)}), \qquad k = 1, 2, 3, \ldots$$
Step sizes $t_k$ are chosen to be fixed and small, or by backtracking line search. If $\nabla f$ is Lipschitz, gradient descent has convergence rate $O(1/\epsilon)$. Downsides: requires f differentiable (this lecture); can be slow to converge (next lecture).

Subgradient method. Now consider f convex, with $\mathrm{dom}(f) = \mathbb{R}^n$, but not necessarily differentiable. Subgradient method: like gradient descent, but replacing gradients with subgradients, i.e., initialize $x^{(0)}$, repeat
$$x^{(k)} = x^{(k-1)} - t_k \, g^{(k-1)}, \qquad k = 1, 2, 3, \ldots$$
where $g^{(k-1)} \in \partial f(x^{(k-1)})$, any subgradient of f at $x^{(k-1)}$. The subgradient method is not necessarily a descent method, so we keep track of the best iterate $x^{(k)}_{\mathrm{best}}$ among $x^{(0)}, \ldots, x^{(k)}$ so far, i.e., $f(x^{(k)}_{\mathrm{best}}) = \min_{i=0,\ldots,k} f(x^{(i)})$.

Outline. Today: how to choose step sizes; convergence analysis; intersection of sets; stochastic subgradient method.

Step size choices. Fixed step sizes: $t_k = t$ for all $k = 1, 2, 3, \ldots$. Diminishing step sizes: chosen to meet the conditions
$$\sum_{k=1}^{\infty} t_k^2 < \infty, \qquad \sum_{k=1}^{\infty} t_k = \infty,$$
i.e., square summable but not summable. It is important that step sizes go to zero, but not too fast. There are other options too, but an important difference to gradient descent is that all step size options are pre-specified, not adaptively computed.

Convergence analysis. Assume that f is convex, $\mathrm{dom}(f) = \mathbb{R}^n$, and also that f is Lipschitz continuous with constant $G > 0$, i.e., $|f(x) - f(y)| \le G \|x - y\|_2$ for all x, y.

Theorem: For a fixed step size t, the subgradient method satisfies $\lim_{k\to\infty} f(x^{(k)}_{\mathrm{best}}) \le f^\star + G^2 t / 2$.

Theorem: For diminishing step sizes, the subgradient method satisfies $\lim_{k\to\infty} f(x^{(k)}_{\mathrm{best}}) = f^\star$.

Basic bound and convergence rate. Letting $R = \|x^{(0)} - x^\star\|_2$, after k steps we have the basic bound
$$f(x^{(k)}_{\mathrm{best}}) - f(x^\star) \;\le\; \frac{R^2 + G^2 \sum_{i=1}^{k} t_i^2}{2 \sum_{i=1}^{k} t_i}.$$
The previous theorems follow from this bound.
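A direct transcription of the update above into Python, with the diminishing steps $t_k = 1/k$ (square summable but not summable) and the best-iterate tracking the slides describe; the test function $f(x) = \|Ax - b\|_1$ and its subgradient are illustrative choices, not from the lecture.

```python
import numpy as np

def subgradient_method(subgrad, f, x0, iters=1000):
    """Subgradient method x_k = x_{k-1} - t_k g_{k-1} with t_k = 1/k.
    Tracks the best iterate, since the method is not a descent method."""
    x = x0.copy()
    x_best, f_best = x.copy(), f(x)
    for k in range(1, iters + 1):
        g = subgrad(x)                     # any subgradient of f at x
        x = x - (1.0 / k) * g
        if f(x) < f_best:
            x_best, f_best = x.copy(), f(x)
    return x_best, f_best

# Example: minimize f(x) = ||Ax - b||_1; A^T sign(Ax - b) is a subgradient.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(20, 5)), rng.normal(size=20)
f = lambda x: np.abs(A @ x - b).sum()
subgrad = lambda x: A.T @ np.sign(A @ x - b)
x_best, f_best = subgradient_method(subgrad, f, np.zeros(5))
```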
  • Mathematical Programming Lecture 19 OR 630 Fall 2005 November 1, 2005 Notes by Nico Diener
Mathematical Programming, Lecture 19, OR 630 Fall 2005, November 1, 2005. Notes by Nico Diener.

Column and Constraint Generation

Idea: The column and constraint generation algorithms exploit the fact that the revised simplex algorithm needs only very limited access to the data of the LP problem in order to solve some large-scale LPs.

Illustration: the one-dimensional cutting-stock problem (Chapter 6 of Bertsimas and Tsitsiklis; Chapters 13 and 26 of Chvátal).

Problem: A paper manufacturer produces paper in large rolls of width W and has a demand for narrow rolls of widths, say, $w_1, \ldots, w_m$, where $0 < w_i < W$. The demand is for $b_i$ rolls of width $w_i$. [Figure 1] We want to use as few large rolls as possible to satisfy the demand. We consider patterns, i.e., ways of cutting a large roll into smaller rolls: each pattern j corresponds to a vector $a_j \in \mathbb{R}^m$ with $a_{ij}$ equal to the number of rolls of width $w_i$ in pattern j (for example, a pattern that cuts one roll each of widths $w_1$ and $w_m$ corresponds to $a = (1, 0, \ldots, 0, 1)^T$).

If we could list all patterns, say $1, 2, \ldots, N$ (N can be very large), then the problem would become

$$\min \sum_{j=1}^{N} x_j \quad \text{s.t.} \quad \sum_{j=1}^{N} a_j x_j = b, \quad x_j \ge 0 \ \forall j. \tag{1}$$

Actually, we want $x_j$, the number of rolls cut in pattern j, to be integer, but we are just going to consider the LP relaxation.

Problem: N is large, and A (the matrix made of columns $a_j$) is known only implicitly: any vector $a \in \mathbb{R}^m$ satisfying $\sum_{i=1}^{m} w_i a_i \le W$, with $a_i \ge 0$ and integer, defines a column of A.
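The point that A is known only implicitly is exactly what delayed column generation exploits: solve a restricted master LP over the patterns found so far, then price a new pattern by solving an integer knapsack over the duals. A compact Python sketch under assumed instance data (it relies on SciPy's HiGHS interface exposing the equality-constraint duals as eqlin.marginals):

```python
import numpy as np
from scipy.optimize import linprog

def cutting_stock(W, w, b, max_iters=50):
    """Delayed column generation for the cutting-stock LP (1):
    min sum_j x_j  s.t.  sum_j a_j x_j = b, x >= 0.
    Starts from the m homogeneous patterns a_i = floor(W/w_i) e_i and
    prices new patterns with an unbounded-knapsack dynamic program."""
    m = len(w)
    cols = [np.eye(m)[:, i] * (W // w[i]) for i in range(m)]  # initial patterns
    for _ in range(max_iters):
        A = np.column_stack(cols)
        res = linprog(np.ones(A.shape[1]), A_eq=A, b_eq=b,
                      bounds=[(0, None)] * A.shape[1], method="highs")
        y = res.eqlin.marginals               # duals of the demand rows
        # Pricing: max y.a  s.t.  sum_i w_i a_i <= W, a_i >= 0 integer.
        dp = np.zeros(W + 1)
        choice = -np.ones(W + 1, dtype=int)
        for cap in range(1, W + 1):
            for i in range(m):
                if w[i] <= cap and dp[cap - w[i]] + y[i] > dp[cap]:
                    dp[cap], choice[cap] = dp[cap - w[i]] + y[i], i
        if dp[W] <= 1 + 1e-9:                 # reduced cost 1 - y.a >= 0: optimal
            return res.fun, cols
        a, cap = np.zeros(m), W               # recover the entering pattern
        while choice[cap] >= 0:
            a[choice[cap]] += 1
            cap -= w[choice[cap]]
        cols.append(a)
    return res.fun, cols

# Example instance in the spirit of Chvatal's chapter (data assumed):
val, cols = cutting_stock(100, [45, 36, 31, 14], [97, 610, 395, 211])
```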
  • An Efficient Solution Methodology for Mixed-Integer Programming Problems Arising in Power Systems (2016)
University of Connecticut, OpenCommons@UConn, Doctoral Dissertations, University of Connecticut Graduate School, 12-15-2016.

An Efficient Solution Methodology for Mixed-Integer Programming Problems Arising in Power Systems. Mikhail Bragin, University of Connecticut - Storrs, [email protected].

Recommended Citation: Bragin, Mikhail, "An Efficient Solution Methodology for Mixed-Integer Programming Problems Arising in Power Systems" (2016). Doctoral Dissertations. 1318. https://opencommons.uconn.edu/dissertations/1318

Abstract (Mikhail Bragin, PhD, University of Connecticut, 2016). For many important mixed-integer programming (MIP) problems, the goal is to obtain near-optimal solutions with quantifiable quality in a computationally efficient manner (within, e.g., 5, 10 or 20 minutes). A traditional method to solve such problems has been Lagrangian relaxation, but the method suffers from zigzagging of multipliers and slow convergence. When solving mixed-integer linear programming (MILP) problems, the recently adopted branch-and-cut may also suffer from slow convergence, because when the convex hull of the problem has a complicated facial structure, facet-defining cuts are typically difficult to obtain, and the method relies mostly on time-consuming branching operations. In this thesis, the novel Surrogate Lagrangian Relaxation method is developed, and its convergence to the optimal multipliers is proved, without knowledge of the optimal dual value and without fully optimizing the relaxed problem. Moreover, for practical implementations, a stepsizing formula that guarantees convergence without requiring the optimal dual value has been constructively developed. The key idea is to select stepsizes in a way that distances between Lagrange multipliers at consecutive iterations decrease; as a result, the Lagrange multipliers converge to a unique limit.
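The thesis's actual stepsizing formula is not reproduced in this abstract; the toy Python update below only illustrates the stated principle, forcing the distance between consecutive multipliers to contract geometrically. It is a hedged sketch of that one idea, not the Surrogate Lagrangian Relaxation method itself, and every name in it is an assumption.

```python
import numpy as np

def contracting_steps(subgrad, lam0, iters=100, alpha=0.9):
    """Toy multiplier update: cap each step so that consecutive multiplier
    moves shrink by a factor alpha < 1, i.e. ||lam_{k+1} - lam_k|| =
    alpha * ||lam_k - lam_{k-1}||, so the multipliers form a Cauchy
    sequence and converge to a unique limit. NOT the thesis's formula."""
    lam = np.asarray(lam0, dtype=float)
    allowed_move = 1.0                    # length allowed for the first move
    for _ in range(iters):
        g = subgrad(lam)                  # (surrogate) subgradient at lam
        ng = np.linalg.norm(g)
        if ng == 0:
            break
        lam = lam + (allowed_move / ng) * g   # step of exactly the allowed length
        allowed_move *= alpha                 # contract the next move
    return lam
```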
  • Column Generation for Linear and Integer Programming
Documenta Math. 65

Column Generation for Linear and Integer Programming

George L. Nemhauser

2010 Mathematics Subject Classification: 90. Keywords and Phrases: Column generation, decomposition, linear programming, integer programming, set partitioning, branch-and-price.

1. The beginning – linear programming

Column generation refers to linear programming (LP) algorithms designed to solve problems in which there are a huge number of variables compared to the number of constraints, and in which the simplex-algorithm step of determining whether the current basic solution is optimal, or of finding a variable to enter the basis, is done by solving an optimization problem rather than by enumeration. To the best of my knowledge, the idea of using column generation to solve linear programs was first proposed by Ford and Fulkerson [16]. However, I couldn't find the term column generation in that paper or in the two subsequent seminal papers by Dantzig and Wolfe [8] and Gilmore and Gomory [17, 18]. The first use of the term that I could find was in [3], a paper with the title "A column generation algorithm for a ship scheduling problem."

Ford and Fulkerson [16] gave a formulation for a multicommodity maximum flow problem in which the variables represented path flows for each commodity. The commodities represent distinct origin-destination pairs, and integrality of the flows is not required. This formulation needs a number of variables exponential in the size of the underlying network, since the number of paths in a graph is exponential in the size of the network. What motivated them to propose this formulation? A more natural and smaller formulation, in terms of the number of constraints plus the number of variables, is easily obtained by using arc variables rather than path variables.
  • Subgradient Methods
Subgradient Methods. Stephen Boyd and Almir Mutapcic. Notes for EE364b, Stanford University, Winter 2006-07. April 13, 2008.

Contents
1 Introduction
2 Basic subgradient method
  2.1 Negative subgradient update
  2.2 Step size rules
  2.3 Convergence results
3 Convergence proof
  3.1 Assumptions
  3.2 Some basic inequalities
  3.3 A bound on the suboptimality bound
  3.4 A stopping criterion
  3.5 Numerical example
4 Alternating projections
  4.1 Optimal step size choice when f⋆ is known
  4.2 Finding a point in the intersection of convex sets
  4.3 Solving convex inequalities
  4.4 Positive semidefinite matrix completion
5 Projected subgradient method
  5.1 Numerical example
6 Projected subgradient for dual problem
  6.1 Numerical example
7 Subgradient method for constrained optimization
  7.1 Numerical example
8 Speeding up subgradient methods

1 Introduction

The subgradient method is a very simple algorithm for minimizing a nondifferentiable convex function. The method looks very much like the ordinary gradient method for differentiable functions, but with several notable exceptions:
• The subgradient method applies directly to nondifferentiable f.
• The step lengths are not chosen via a line search, as in the ordinary gradient method. In the most common cases, the step lengths are fixed ahead of time.
• Unlike the ordinary gradient method, the subgradient method is not a descent method; the function value can (and often does) increase.

The subgradient method is readily extended to handle problems with constraints. Subgradient methods can be much slower than interior-point methods (or Newton's method in the unconstrained case).
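Sections 4.1 and 4.2 of these notes observe that, when the optimal value f⋆ = 0 is known, Polyak's step applied to f(x) = max_i dist(x, C_i) reduces to projecting onto the currently farthest set. A minimal Python sketch of that observation follows; the two example sets (a halfspace and the unit ball) are illustrative choices, not the notes' numerical example.

```python
import numpy as np

def polyak_intersection(projections, x0, tol=1e-9, iters=1000):
    """Find a point in the intersection of convex sets by minimizing
    f(x) = max_i dist(x, C_i) with the subgradient method and Polyak's
    step (f* = 0 known). Each element of `projections` maps x to its
    Euclidean projection onto one set. With the unit subgradient
    g = (x - P(x)) / ||x - P(x)||, the step t = f(x)/||g||^2 = f(x)
    moves x exactly onto the farthest set."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        dists = [np.linalg.norm(x - P(x)) for P in projections]
        i = int(np.argmax(dists))
        if dists[i] <= tol:
            return x                     # x lies in every set
        p = projections[i](x)
        g = (x - p) / dists[i]           # subgradient of f at x
        x = x - dists[i] * g             # Polyak step; equals x = p here
    return x

# Example: halfspace {x : a.x <= b} and the unit ball (illustrative).
a, b = np.array([1.0, 2.0]), 1.0
proj_half = lambda x: x - max(0.0, (a @ x - b) / (a @ a)) * a
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
x = polyak_intersection([proj_half, proj_ball], np.array([3.0, 4.0]))
```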
  • Base Polytopes
6.883 Learning with Combinatorial Structure, Note for Lecture 8. Authors: Hongyi Zhang.

A bit of history in combinatorial optimization. Last time we talked about the Lovász extension, which plays an important role in the optimization of submodular functions. Actually, despite the name Lovász, Jack Edmonds is another important figure who made a huge contribution to this concept, and to submodular optimization in general. In DISCML 2011 (http://las.ethz.ch/discml/discml11.html, NIPS Workshop on Discrete Optimization in Machine Learning 2011), there was a brief introduction to Jack Edmonds' contributions to mathematics and computer science by Jeff Bilmes.

1 Base polytopes

1.1 Examples of base polytopes

Recall that in the last lecture we defined the base polytope $B_F$ of a submodular function and showed that there is a greedy algorithm to solve the optimization problem
$$\max_{y \in B_F} \; y^{\top} x,$$
and that the solution is closely related to the Lovász extension. Now we shall look at several other examples where interesting concepts turn out to be the base polytopes of some submodular (or supermodular) functions.

1.1.1 Probability simplex

Let the ground set be $\mathcal{V}$ with $|\mathcal{V}| = n$. Define $F(S) = \min\{|S|, 1\}$ for $S \subseteq \mathcal{V}$; in the last lecture we already saw that $F(S)$ is submodular. It is easy to check that $B_F$ is a probability simplex. In fact, by definition
$$B_F = \Big\{ y : \sum_{i \in \mathcal{V}} y_i = F(\mathcal{V}) \ \text{and} \ \sum_{i \in S} y_i \le F(S) \ \forall S \subseteq \mathcal{V} \Big\} = \Big\{ y : \sum_{i=1}^{n} y_i = 1 \ \text{and} \ y_i \ge 0 \ \forall i \Big\},$$
which is a standard (n−1)-simplex in $\mathbb{R}^n$.

1.1.2 Permutahedron

Let the ground set be $\mathcal{V}$ with $|\mathcal{V}| = n$. …
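The greedy algorithm referred to above (due to Edmonds) is short enough to state in code: sort the ground set by decreasing x and assign the marginal gains of F. A minimal Python sketch, using the note's F(S) = min(|S|, 1) as the worked example; for normalized submodular F, the optimal value y·x recovers the Lovász extension at x.

```python
import numpy as np

def greedy_base_polytope(F, ground, x):
    """Edmonds' greedy algorithm: maximize y.x over the base polytope B_F
    of a normalized submodular function F (F(empty set) = 0). Visit the
    ground set in order of decreasing x and assign marginal gains; the
    result y is an optimal vertex of B_F."""
    order = sorted(range(len(ground)), key=lambda i: -x[i])
    y = np.zeros(len(ground))
    S = []
    for i in order:
        before = F(frozenset(S))
        S.append(ground[i])
        y[i] = F(frozenset(S)) - before   # marginal gain of adding element i
    return y

# Example: F(S) = min(|S|, 1), whose base polytope is the probability simplex.
F = lambda S: min(len(S), 1)
y = greedy_base_polytope(F, [0, 1, 2, 3], np.array([0.5, 2.0, 1.0, 0.1]))
# y == [0, 1, 0, 0]: all mass on the largest coordinate of x, a simplex vertex.
```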
  • On the Links Between Probabilistic Graphical Models and Submodular Optimisation (Senanayak Sesh Kumar Karri)
On the Links between Probabilistic Graphical Models and Submodular Optimisation. Senanayak Sesh Kumar Karri.

To cite this version: Senanayak Sesh Kumar Karri. On the Links between Probabilistic Graphical Models and Submodular Optimisation. Machine Learning [cs.LG]. Université Paris sciences et lettres, 2016. English. NNT: 2016PSLEE047. tel-01753810. HAL Id: tel-01753810, https://tel.archives-ouvertes.fr/tel-01753810. Submitted on 29 Mar 2018.

DOCTORAL THESIS of PSL Research University, prepared at École normale supérieure: On the Links between Probabilistic Graphical Models and Submodular Optimisation (Liens entre modèles graphiques probabilistes et optimisation sous-modulaire). Doctoral school no. 386, École doctorale de sciences mathématiques de Paris Centre; specialty: Computer Science.

Jury: Andreas Krause (ETH Zurich, reviewer); Nikos Komodakis (ENPC Paris, reviewer); Francis Bach (Inria Paris, thesis advisor); Josef Sivic (ENS Paris, jury member); Antonin Chambolle (CMAP, École Polytechnique Paris, jury member); Guillaume Obozinski (ENPC Paris, jury member). Defended by Senanayak Sesh Kumar Karri on 27.09.2016, supervised by Francis Bach.

"What is the purpose of life? Proof and Conjecture, …"
  • Branch-and-Price: Column Generation for Solving Huge Integer Programs
Branch-and-Price: Column Generation for Solving Huge Integer Programs

Cynthia Barnhart, Ellis L. Johnson, George L. Nemhauser, Martin W. P. Savelsbergh, Pamela H. Vance
School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA

Abstract. We discuss formulations of integer programs with a huge number of variables and their solution by column generation methods, i.e., implicit pricing of nonbasic variables to generate new columns or to prove LP optimality at a node of the branch-and-bound tree. We present classes of models for which this approach decomposes the problem, provides tighter LP relaxations, and eliminates symmetry. We then discuss computational issues and the implementation of column generation branch-and-bound algorithms, including special branching rules and efficient ways to solve the LP relaxation. February; revised May; revised January.

Notes: Currently at MIT. Currently at Auburn University. This research has been supported by grants and contracts from NSF, AFOSR, and IBM. An abbreviated and unrefereed version of this paper appeared in Barnhart et al.

1. Introduction

The successful solution of large-scale mixed integer programming (MIP) problems requires formulations whose linear programming (LP) relaxations give a good approximation to the convex hull of feasible solutions. In the last decade, a great deal of attention has been given to the branch-and-cut approach to solving MIPs. Hoffman and Padberg, and Nemhauser and Wolsey, give general expositions of this methodology. The basic …
  • Column Generation Algorithms for Nonlinear Optimization, II: Numerical Investigations
Column generation algorithms for nonlinear optimization, II: Numerical investigations

Ricardo García-Ródenas (a), Ángel Marín (b), Michael Patriksson (c, d)
(a) Universidad de Castilla-La Mancha, Escuela Superior de Informática, Paseo de la Universidad 4, 13071 Ciudad Real, Spain
(b) Universidad Politécnica de Madrid, Escuela Técnica Superior de Ingenieros Aeronáuticos, Plaza Cardenal Cisneros 3, 28040 Madrid, Spain
(c) Department of Mathematical Sciences, Chalmers University of Technology, SE-412 96 Göteborg, Sweden
(d) Department of Mathematical Sciences, University of Gothenburg, SE-412 96 Göteborg, Sweden

ABSTRACT. García et al. [1] present a class of column generation (CG) algorithms for nonlinear programs. Its main motivation from a theoretical viewpoint is that, under some circumstances, finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that within the class there are certain nonlinear column generation problems that can accelerate the convergence of a solution approach that generates a sequence of feasible points. This algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study of the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems has been designed to study the parameters involved in these methods. The second group has been designed to investigate the role and the computation of the prolongation of the generated columns to the relative boundary. The last one has been designed to carry out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches.