A Course in Combinatorial Optimization

File: PDF, 16 pages, 1020 KB

A Course in Combinatorial Optimization

Alexander Schrijver

CWI, Kruislaan 413, 1098 SJ Amsterdam, The Netherlands, and Department of Mathematics, University of Amsterdam, Plantage Muidergracht 24, 1018 TV Amsterdam, The Netherlands.

March 23, 2017

Copyright © A. Schrijver

Contents

1. Shortest paths and trees
   1.1. Shortest paths with nonnegative lengths
   1.2. Speeding up Dijkstra's algorithm with heaps
   1.3. Shortest paths with arbitrary lengths
   1.4. Minimum spanning trees
2. Polytopes, polyhedra, Farkas' lemma, and linear programming
   2.1. Convex sets
   2.2. Polytopes and polyhedra
   2.3. Farkas' lemma
   2.4. Linear programming
3. Matchings and covers in bipartite graphs
   3.1. Matchings, covers, and Gallai's theorem
   3.2. M-augmenting paths
   3.3. Kőnig's theorems
   3.4. Cardinality bipartite matching algorithm
   3.5. Weighted bipartite matching
   3.6. The matching polytope
4. Menger's theorem, flows, and circulations
   4.1. Menger's theorem
   4.2. Flows in networks
   4.3. Finding a maximum flow
   4.4. Speeding up the maximum flow algorithm
   4.5. Circulations
   4.6. Minimum-cost flows
5. Nonbipartite matching
   5.1. Tutte's 1-factor theorem and the Tutte-Berge formula
   5.2. Cardinality matching algorithm
   5.3. Weighted matching algorithm
   5.4. The matching polytope
   5.5. The Cunningham-Marsh formula
6. Problems, algorithms, and running time
   6.1. Introduction
   6.2. Words
   6.3. Problems
   6.4. Algorithms and running time
   6.5. The class NP
   6.6. The class co-NP
   6.7. NP-completeness
   6.8. NP-completeness of the satisfiability problem
   6.9. NP-completeness of some other problems
   6.10. Turing machines
7. Cliques, stable sets, and colourings
   7.1. Introduction
   7.2. Edge-colourings of bipartite graphs
   7.3. Partially ordered sets
   7.4. Perfect graphs
   7.5. Chordal graphs
8. Integer linear programming and totally unimodular matrices
   8.1. Integer linear programming
   8.2. Totally unimodular matrices
   8.3. Totally unimodular matrices from bipartite graphs
   8.4. Totally unimodular matrices from directed graphs
9. Multicommodity flows and disjoint paths
   9.1. Introduction
   9.2. Two commodities
   9.3. Disjoint paths in acyclic directed graphs
   9.4. Vertex-disjoint paths in planar graphs
   9.5. Edge-disjoint paths in planar graphs
   9.6. A column generation technique for multicommodity flows
10. Matroids
   10.1. Matroids and the greedy algorithm
   10.2. Equivalent axioms for matroids
   10.3. Examples of matroids
   10.4. Two technical lemmas
   10.5. Matroid intersection
   10.6. Weighted matroid intersection
   10.7. Matroids and polyhedra
References
Name index
Subject index

1. Shortest paths and trees

1.1. Shortest paths with nonnegative lengths

Let D = (V, A) be a directed graph, and let s, t ∈ V. A walk is a sequence P = (v_0, a_1, v_1, ..., a_m, v_m) where a_i is an arc from v_{i−1} to v_i for i = 1, ..., m. If v_0, ..., v_m are all different, P is called a path. If s = v_0 and t = v_m, the vertices s and t are the starting and end vertex of P, respectively, and P is called an s−t walk and, if P is a path, an s−t path. The length of P is m. The distance from s to t is the minimum length of any s−t path. (If no s−t path exists, we set the distance from s to t equal to ∞.)

It is not difficult to determine the distance from s to t. Let V_i denote the set of vertices of D at distance i from s.
Note that for each i:

(1)   V_{i+1} is equal to the set of vertices v ∈ V \ (V_0 ∪ V_1 ∪ · · · ∪ V_i) for which (u, v) ∈ A for some u ∈ V_i.

This gives us directly an algorithm for determining the sets V_i: we set V_0 := {s}, and next we determine with rule (1) the sets V_1, V_2, ... successively, until V_{i+1} = ∅. In fact, it gives a linear-time algorithm:

Theorem 1.1. The algorithm has running time O(|A|).

Proof. Directly from the description.

In fact the algorithm finds the distance from s to all vertices reachable from s. Moreover, it gives the shortest paths. These can be described by a rooted (directed) tree T = (V′, A′), with root s, such that V′ is the set of vertices reachable in D from s and such that for each u, v ∈ V′, each directed u−v path in T is a shortest u−v path in D.¹ Indeed, when we reach a vertex t in the algorithm, we store the arc by which t is reached. Then at the end of the algorithm, all stored arcs form a rooted tree with this property.

¹ A rooted tree, with root s, is a directed graph such that the underlying undirected graph is a tree and such that each vertex t ≠ s has indegree 1. Thus each vertex t is reachable from s by a unique directed s−t path.

There is also a trivial min-max relation characterizing the minimum length of an s−t path. To this end, call a subset A′ of A an s−t cut if A′ = δ^out(U) for some subset U of V satisfying s ∈ U and t ∉ U.² Then the following was observed by Robacker [1956]:

² δ^out(U) and δ^in(U) denote the sets of arcs leaving and entering U, respectively.

Theorem 1.2. The minimum length of an s−t path is equal to the maximum number of pairwise disjoint s−t cuts.

Proof. Trivially, the minimum is at least the maximum, since each s−t path intersects each s−t cut in an arc. The fact that the minimum is equal to the maximum follows by considering the s−t cuts δ^out(U_i) for i = 0, ..., d−1, where d is the distance from s to t and where U_i is the set of vertices of distance at most i from s.

This can be generalized to the case where arcs have a certain 'length'. For any 'length' function l : A → Q_+ and any walk P = (v_0, a_1, v_1, ..., a_m, v_m), let l(P) be the length of P. That is:

(2)   l(P) := Σ_{i=1}^{m} l(a_i).

Now the distance from s to t (with respect to l) is equal to the minimum length of any s−t path. If no s−t path exists, the distance is +∞.

Again there is an easy algorithm, due to Dijkstra [1959], to find a minimum-length s−t path for all t. Start with U := V and set f(s) := 0 and f(v) := ∞ if v ≠ s. Next apply the following iteratively:

(3)   Find u ∈ U minimizing f(u) over u ∈ U. For each a = (u, v) ∈ A for which f(v) > f(u) + l(a), reset f(v) := f(u) + l(a). Reset U := U \ {u}.

We stop if U = ∅. Then:

Theorem 1.3. The final function f gives the distances from s.

Proof. Let dist(v) denote the distance from s to v, for any vertex v. Trivially, f(v) ≥ dist(v) for all v, throughout the iterations. We prove that throughout the iterations, f(v) = dist(v) for each v ∈ V \ U. At the start of the algorithm this is trivial (as U = V).

Consider any iteration (3). It suffices to show that f(u) = dist(u) for the chosen u ∈ U. Suppose f(u) > dist(u). Let s = v_0, v_1, ..., v_k = u be a shortest s−u path. Let i be the smallest index with v_i ∈ U. Then f(v_i) = dist(v_i). Indeed, if i = 0, then f(v_i) = f(s) = 0 = dist(s) = dist(v_i). If i > 0, then (as v_{i−1} ∈ V \ U):

(4)   f(v_i) ≤ f(v_{i−1}) + l(v_{i−1}, v_i) = dist(v_{i−1}) + l(v_{i−1}, v_i) = dist(v_i).

This implies f(v_i) ≤ dist(v_i) ≤ dist(u) < f(u), contradicting the choice of u.

Clearly, the number of iterations is |V|, while each iteration takes O(|V|) time. So the algorithm has a running time O(|V|²). In fact, by storing for each vertex v the last arc a for which (3) applied, we find a rooted tree T = (V′, A′) with root s such that V′ is the set of vertices reachable from s and such that if u, v ∈ V′ are such that T contains a directed u−v path, then this path is a shortest u−v path in D.

Thus we have:

Theorem 1.4. Given a directed graph D = (V, A), s, t ∈ V, and a length function l : A → Q_+, a shortest s−t path can be found in time O(|V|²).

Proof. See above.

For an improvement, see Section 1.2.

A weighted version of Theorem 1.2 is as follows:

Theorem 1.5. Let D = (V, A) be a directed graph, s, t ∈ V, and let l : A → Z_+. Then the minimum length of an s−t path is equal to the maximum number k of s−t cuts C_1, ..., C_k (repetition allowed) such that each arc a is in at most l(a) of the cuts C_i.
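The update rule (3) and the tree construction above translate directly into a short program. The following is a minimal sketch in Python, not part of the course notes: it already uses a heap (Python's heapq module), anticipating the speed-up discussed in Section 1.2, and it represents D by an adjacency dictionary. The function name, the graph encoding, and the small example are illustrative choices.

import heapq

def dijkstra(adj, s):
    """Distances from s in a digraph with nonnegative arc lengths.

    adj: dict mapping each vertex to a list of (head, length) pairs.
    Returns (dist, pred); pred records the last arc used to reach each
    vertex, so it encodes the shortest-path tree T = (V', A').
    """
    dist = {s: 0}
    pred = {s: None}
    heap = [(0, s)]                       # tentative values of f
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale entry: u was already settled
        for v, l in adj.get(u, ()):
            if dist.get(v, float("inf")) > d + l:
                dist[v] = d + l           # reset f(v) := f(u) + l(a)
                pred[v] = u
                heapq.heappush(heap, (d + l, v))
    return dist, pred

# Small usage example; vertices not reachable from 's' simply do not appear in dist.
adj = {"s": [("a", 2), ("b", 5)], "a": [("b", 1), ("t", 7)], "b": [("t", 3)]}
dist, pred = dijkstra(adj, "s")
print(dist["t"])   # 6, along s -> a -> b -> t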
Recommended publications
  • A Branch-and-Price Approach with MILP Formulation to Modularity Density Maximization on Graphs
    A BRANCH-AND-PRICE APPROACH WITH MILP FORMULATION TO MODULARITY DENSITY MAXIMIZATION ON GRAPHS. KEISUKE SATO, Signalling and Transport Information Technology Division, Railway Technical Research Institute, 2-8-38 Hikari-cho, Kokubunji-shi, Tokyo 185-8540, Japan. YOICHI IZUNAGA, Information Systems Research Division, The Institute of Behavioral Sciences, 2-9 Ichigayahonmura-cho, Shinjyuku-ku, Tokyo 162-0845, Japan. (arXiv:1705.02961v3 [cs.SI], 27 Jun 2017)

    Abstract. For clustering of an undirected graph, this paper presents an exact algorithm for the maximization of modularity density, a more complicated criterion to overcome drawbacks of the well-known modularity. The problem can be interpreted as the set-partitioning problem, which reminds us of its integer linear programming (ILP) formulation. We provide a branch-and-price framework for solving this ILP, or column generation combined with branch-and-bound. Above all, we formulate the column generation subproblem to be solved repeatedly as a simpler mixed integer linear programming (MILP) problem. Acceleration techniques called the set-packing relaxation and the multiple-cutting-planes-at-a-time combined with the MILP formulation enable us to optimize the modularity density for famous test instances including ones with over 100 vertices in around four minutes on a PC. Our solution method is deterministic and the computation time is not affected by any stochastic behavior. For one of them, column generation at the root node of the branch-and-bound tree provides a fractional upper bound solution and our algorithm finds an integral optimal solution after branching. E-mail addresses: (Keisuke Sato) [email protected], (Yoichi Izunaga) [email protected].
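    For readers who have not seen the set-partitioning view mentioned in this abstract, a generic sketch of that ILP is given below; the symbols are ours, not the paper's: C ranges over candidate clusters (nonempty subsets of the vertex set V), f(C) stands for the cluster's contribution to the criterion being maximized, and z_C = 1 means that C is chosen as a cluster.

\begin{align*}
  \max\quad & \sum_{\emptyset \neq C \subseteq V} f(C)\, z_C \\
  \text{s.t.}\quad & \sum_{C \ni v} z_C = 1 && \text{for every vertex } v \in V,\\
  & z_C \in \{0, 1\} && \text{for every candidate cluster } C.
\end{align*}

    Since there are exponentially many candidate clusters, the LP relaxation is solved by column generation: the pricing problem searches for a cluster C with favourable reduced cost, and the paper's contribution is to formulate that pricing problem as a compact MILP.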
  • Prizes and Awards Session
    PRIZES AND AWARDS SESSION
    Wednesday, July 12, 2021, 9:00 AM EDT
    2021 SIAM Annual Meeting, July 19–23, 2021, held in virtual format

    Table of Contents:
    AWM-SIAM Sonia Kovalevsky Lecture
    George B. Dantzig Prize
    George Pólya Prize for Mathematical Exposition
    George Pólya Prize in Applied Combinatorics
    I.E. Block Community Lecture
    John von Neumann Prize
    Lagrange Prize in Continuous Optimization
    Ralph E. Kleinman Prize
    SIAM Prize for Distinguished Service to the Profession
    SIAM Student Paper Prizes
  • Mathematical Programming Lecture 19 OR 630 Fall 2005 November 1, 2005 Notes by Nico Diener
    Mathematical Programming Lecture 19, OR 630 Fall 2005, November 1, 2005. Notes by Nico Diener.

    Column and Constraint Generation. Idea: the column and constraint generation algorithms exploit the fact that the revised simplex algorithm only needs very limited access to the data of the LP problem in order to solve some large-scale LPs.

    Illustration: the one-dimensional cutting-stock problem (Chapter 6 of Bertsimas and Tsitsiklis, Chapters 13 and 26 of Chvatal).

    Problem. A paper manufacturer produces paper in large rolls of width W and has a demand for narrow rolls of widths say w_1, ..., w_m, where 0 < w_i < W. The demand is for b_i rolls of width w_i. (Figure 1: cutting a large roll into narrow rolls.) We want to use as few large rolls as possible to satisfy the demand. We consider patterns, i.e., ways of cutting a large roll into smaller rolls: each pattern j corresponds to a vector a_j ∈ R^m with a_ij equal to the number of rolls of width w_i in pattern j. Example: a = (1, 0, 0, 1, 0, ..., 1)ᵀ.

    If we could list all patterns, say 1, 2, ..., N (N can be very large), then the problem would become:

    (1)   min Σ_{j=1}^{N} x_j   subject to   Σ_{j=1}^{N} a_j x_j = b,   x_j ≥ 0 for all j.

    Actually we want x_j, the number of rolls cut in pattern j, to be integer, but we are just going to consider the LP relaxation.

    Problem: N is large and A (the matrix made of the columns a_j) is known only implicitly: any vector a ∈ R^m satisfying Σ_{i=1}^{m} w_i a_i ≤ W, with a_i ≥ 0 and integer, defines a column of A.
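    To make the column generation loop concrete, here is a minimal Python prototype for the LP relaxation above; it is a sketch under our own assumptions, not code from the lecture. Rather than reading dual prices off the restricted master, it solves the restricted dual (max bᵀy subject to a_jᵀy ≤ 1 for every known pattern and y ≥ 0) with scipy.optimize.linprog, and then prices a new pattern with an unbounded-knapsack dynamic program: a pattern a has negative reduced cost 1 − yᵀa exactly when yᵀa > 1. Function names, tolerances, and the sample data are illustrative.

import numpy as np
from scipy.optimize import linprog

def best_pattern(widths, W, y):
    # Unbounded knapsack by DP: maximize sum_i y[i]*a[i] subject to sum_i widths[i]*a[i] <= W,
    # with a[i] nonnegative integers.  Widths are assumed to be integers.
    best = [0.0] * (W + 1)
    take = [-1] * (W + 1)          # index of the roll added to reach best[c]; -1 = none
    for c in range(1, W + 1):
        best[c], take[c] = best[c - 1], -1
        for i, w in enumerate(widths):
            if w <= c and best[c - w] + y[i] > best[c] + 1e-12:
                best[c], take[c] = best[c - w] + y[i], i
    pattern, c = [0] * len(widths), W
    while c > 0:
        if take[c] == -1:
            c -= 1
        else:
            pattern[take[c]] += 1
            c -= widths[take[c]]
    return best[W], pattern

def cutting_stock_lp(widths, demand, W, max_iter=100):
    m = len(widths)
    # Initial patterns: pattern i cuts as many rolls of width widths[i] as fit in one big roll.
    patterns = []
    for i in range(m):
        p = [0] * m
        p[i] = W // widths[i]
        patterns.append(p)
    for _ in range(max_iter):
        # Restricted dual: max b^T y  s.t.  a_j^T y <= 1 for the known patterns, y >= 0
        # (linprog minimizes, hence the minus sign on the objective).
        res = linprog(c=-np.asarray(demand, float),
                      A_ub=np.asarray(patterns, float), b_ub=np.ones(len(patterns)),
                      bounds=[(0, None)] * m, method="highs")
        y = res.x
        value, pattern = best_pattern(widths, W, y)
        if value <= 1 + 1e-9:      # no pattern with negative reduced cost: LP optimum reached
            break
        patterns.append(pattern)
    # Final restricted master: min sum_j x_j  s.t.  A x >= b, x >= 0.
    A = np.asarray(patterns, float).T
    res = linprog(c=np.ones(len(patterns)), A_ub=-A, b_ub=-np.asarray(demand, float),
                  bounds=[(0, None)] * len(patterns), method="highs")
    return res.fun, patterns, res.x

# Tiny illustrative instance (widths, demands, and the big-roll width are made up):
lp_value, patterns, x = cutting_stock_lp([3, 5, 9], [25, 20, 15], 20)
print(round(lp_value, 2), "large rolls (LP bound), using", len(patterns), "patterns")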
  • The Traveling Preacher Problem, Report No
    Discrete Optimization 5 (2008) 290–292. www.elsevier.com/locate/disopt

    The travelling preacher, projection, and a lower bound for the stability number of a graph. Kathie Cameron (Wilfrid Laurier University, Waterloo, Ontario, Canada N2L 3C5) and Jack Edmonds (EP Institute, Kitchener, Ontario, Canada N2M 2M6). Received 9 November 2005; accepted 27 August 2007; available online 29 October 2007. In loving memory of George Dantzig.

    Abstract. The coflow min–max equality is given a travelling preacher interpretation, and is applied to give a lower bound on the maximum size of a set of vertices, no two of which are joined by an edge. © 2007 Elsevier B.V. All rights reserved.

    Keywords: Network flow; Circulation; Projection of a polyhedron; Gallai's conjecture; Stable set; Total dual integrality; Travelling salesman cost allocation game

    1. Coflow and the travelling preacher. An interpretation and an application prompt us to recall, and hopefully promote, an older combinatorial min–max equality called the Coflow Theorem. Let G be a digraph. For each edge e of G, let d_e be a non-negative integer. The capacity d(C) of a dicircuit C means the sum of the d_e's of the edges in C. An instance of the Coflow Theorem (1982) [2,3] says:

    Theorem 1. The maximum cardinality of a subset S of the vertices of G such that each dicircuit C of G contains at most d(C) members of S equals the minimum of the sum of the capacities of any subset H of dicircuits of G plus the number of vertices of G which are not in a dicircuit of H.
  • Column Generation for Linear and Integer Programming
    Documenta Math. 65

    Column Generation for Linear and Integer Programming. George L. Nemhauser. 2010 Mathematics Subject Classification: 90. Keywords and Phrases: Column generation, decomposition, linear programming, integer programming, set partitioning, branch-and-price.

    1 The beginning – linear programming

    Column generation refers to linear programming (LP) algorithms designed to solve problems in which there are a huge number of variables compared to the number of constraints, and the simplex algorithm step of determining whether the current basic solution is optimal or finding a variable to enter the basis is done by solving an optimization problem rather than by enumeration. To the best of my knowledge, the idea of using column generation to solve linear programs was first proposed by Ford and Fulkerson [16]. However, I couldn't find the term column generation in that paper or the subsequent two seminal papers by Dantzig and Wolfe [8] and Gilmore and Gomory [17,18]. The first use of the term that I could find was in [3], a paper with the title "A column generation algorithm for a ship scheduling problem".

    Ford and Fulkerson [16] gave a formulation for a multicommodity maximum flow problem in which the variables represented path flows for each commodity. The commodities represent distinct origin-destination pairs and integrality of the flows is not required. This formulation needs a number of variables exponential in the size of the underlying network since the number of paths in a graph is exponential in the size of the network. What motivated them to propose this formulation? A more natural and smaller formulation in terms of the number of constraints plus the number of variables is easily obtained by using arc variables rather than path variables.
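    The path-based formulation that Ford and Fulkerson proposed can be sketched as follows; the notation is ours, not the article's: P_k denotes the set of directed paths available to commodity k, u_a the capacity of arc a, and x_P the amount of flow routed along path P.

\begin{align*}
  \max\quad & \sum_{k} \sum_{P \in \mathcal{P}_k} x_P \\
  \text{s.t.}\quad & \sum_{k} \sum_{P \in \mathcal{P}_k :\, a \in P} x_P \le u_a && \text{for every arc } a,\\
  & x_P \ge 0 && \text{for every path } P.
\end{align*}

    There is one constraint per arc but exponentially many path variables, which is exactly the shape of problem column generation is designed for: the pricing step only has to find, for some commodity, a path whose arcs have total dual value less than 1, i.e. a shortest-path computation using the duals of the arc constraints as lengths.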
  • A. Schrijver, Honorary Dmath Citation (PDF)
    A. Schrijver, Honorary D. Math Citation. Honorary D. Math Citation for Lex Schrijver, University of Waterloo Convocation, June, 2002.

    Mr. Chancellor, I present Alexander Schrijver. One of the world's leading researchers in discrete optimization, Alexander Schrijver is also recognized as the subject's foremost scholar. A typical problem in discrete optimization requires choosing the best solution from a large but finite set of possibilities, for example, the best order in which a salesperson should visit clients so that the route travelled is as short as possible. While ideally we would like to have an efficient procedure to find the best solution, a famous result of the 1970s showed that for very many discrete optimization problems, there is little hope to compute the best solution quickly. In 1981, Alexander Schrijver and his co-workers, Groetschel and Lovasz, revolutionized the field by providing a powerful general tool to identify problems for which efficient procedures do exist.

    Professor Schrijver's record of establishing groundbreaking results is complemented by an almost legendary reputation for finding clearer, more compact derivations of results of others. This ability, together with high standards of mathematical rigour and historical accuracy, has made him the leading scholar in discrete optimization. His 1986 book, Theory of Linear and Integer Programming, is a landmark, and his forthcoming three-volume work on combinatorial optimization will certainly become one as well.

    Alexander Schrijver was born and educated in Amsterdam. Since 1989 he has worked at CWI, the National Research Institute for Mathematics and Computer Science of the Netherlands. He is also Professor of Mathematics at the University of Amsterdam.
  • Submodular Functions, Optimization, and Applications to Machine Learning — Spring Quarter, Lecture 11 —
    Submodular Functions, Optimization, and Applications to Machine Learning, Spring Quarter, Lecture 11. http://www.ee.washington.edu/people/faculty/bilmes/classes/ee596b_spring_2016/
    Prof. Jeff Bilmes, University of Washington, Seattle, Department of Electrical Engineering, http://melodi.ee.washington.edu/~bilmes. May 9th, 2016.

    [Title-slide figure: the submodularity inequality f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B), illustrated with a Venn diagram and surrounded by portraits labelled clockwise from top left: László Lovász, Jack Edmonds, Satoru Fujishige, George Nemhauser, Laurence Wolsey, András Frank, Lloyd Shapley, H. Narayanan, Robert Bixby, William Cunningham, William Tutte, Richard Rado, Alexander Schrijver, Garrett Birkhoff, Hassler Whitney, Richard Dedekind.]

    Logistics / Review. Cumulative outstanding reading: read chapters 2 and 3 from Fujishige's book; read chapter 1 from Fujishige's book.

    Announcements, assignments, and reminders: Homework 4, soon available at our assignment dropbox (https://canvas.uw.edu/courses/1039754/assignments). Homework 3, available at the same dropbox, due (electronically) Monday (5/2) at 11:55pm. Homework 2, due Monday (4/18) at 11:55pm. Homework 1, due Friday (4/8) at 11:55pm. Weekly office hours: Mondays, 3:30–4:30, or by Skype or Google Hangout (set up a meeting via our discussion board, https://canvas.uw.edu/courses/1039754/discussion_topics).
  • Branch-and-Price: Column Generation for Solving Huge Integer Programs
    Branch-and-Price: Column Generation for Solving Huge Integer Programs. Cynthia Barnhart, Ellis L. Johnson, George L. Nemhauser, Martin W. P. Savelsbergh, Pamela H. Vance. School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA.

    Abstract. We discuss formulations of integer programs with a huge number of variables and their solution by column generation methods, i.e., implicit pricing of nonbasic variables to generate new columns or to prove LP optimality at a node of the branch-and-bound tree. We present classes of models for which this approach decomposes the problem, provides tighter LP relaxations, and eliminates symmetry. We then discuss computational issues and implementation of column generation branch-and-bound algorithms, including special branching rules and efficient ways to solve the LP relaxation. (February, revised May, revised January.)

    Footnotes: currently at MIT; currently at Auburn University. This research has been supported by the following grants and contracts: NSF SES, NSF and AFORS DDM, NSF DDM, NSF DMI, and IBM MHV. An abbreviated and unrefereed version of this paper appeared in Barnhart et al.

    Introduction. The successful solution of large-scale mixed integer programming (MIP) problems requires formulations whose linear programming (LP) relaxations give a good approximation to the convex hull of feasible solutions. In the last decade a great deal of attention has been given to the branch-and-cut approach to solving MIPs. Hoffman and Padberg, and Nemhauser and Wolsey, give general expositions of this methodology. The basic
  • Column Generation Algorithms for Nonlinear Optimization, II: Numerical Investigations
    Column generation algorithms for nonlinear optimization, II: Numerical investigations. Ricardo Garcia-Rodenas (a), Angel Marin (b,*), Michael Patriksson (c, d). (a) Universidad de Castilla La Mancha, Escuela Superior de Informatica, Paseo de la Universidad, 4, 13071 Ciudad Real, Spain. (b) Universidad Politecnica de Madrid, Escuela Tecnica Superior de Ingenieros Aeronauticos, Plaza Cardenal Cisneros, 3, 28040 Madrid, Spain. (c) Department of Mathematical Sciences, Chalmers University of Technology, SE-412 96 Goteborg, Sweden. (d) Department of Mathematical Sciences, University of Gothenburg, SE-412 96 Goteborg, Sweden.

    Abstract. Garcia et al. [1] present a class of column generation (CG) algorithms for nonlinear programs. Its main motivation from a theoretical viewpoint is that under some circumstances, finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that within the class there are certain nonlinear column generation problems that can accelerate the convergence of a solution approach which generates a sequence of feasible points. This algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study on the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems has been designed to study the parameters involved in these methods. The second group has been designed to investigate the role and the computation of the prolongation of the generated columns to the relative boundary. The last one has been designed to carry out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches.
  • A Column Generation Approach to the Discrete Barycenter Problem (arXiv:1907.01541v2 [math.OC], 1 Jun 2020)
    A Column Generation Approach to the Discrete Barycenter Problem. Steffen Borgwardt ([email protected]; University of Colorado Denver) and Stephan Patterson ([email protected]; University of Colorado Denver).

    Abstract. The discrete Wasserstein barycenter problem is a minimum-cost mass transport problem for a set of discrete probability measures. Although a barycenter is computable through linear programming, the underlying linear program can be extremely large. For worst-case input, the best known linear programming formulation is exponential in the number of variables but contains an extremely low number of constraints, making it a prime candidate for column generation. In this paper, we devise a column generation strategy through a Dantzig-Wolfe decomposition of the linear program. Critical to the success of an efficient computation, we produce efficiently solvable subproblems, namely, a pricing problem in the form of a classical transportation problem. Further, we describe two efficient methods for constructing an initial feasible restricted master problem, and we use the structure of the constraints for a memory-efficient implementation. We support our algorithm with some computational experiments that demonstrate significant improvement in the memory requirements and running times compared to a computation using the full linear program.

    Keywords: discrete barycenter, optimal transport, linear programming, column generation. MSC 2010: 49M27, 90B80, 90C05, 90C08, 90C46.

    1. Introduction. Optimal transport problems involving joint transport to a set of probability measures appear in a variety of fields, including recent work in image processing [15,22], machine learning [14,21,27], and graph theory [25], to name but a few. A barycenter is another probability measure which minimizes the total distance to all input measures (i.e., images) and acts as an average distribution in the probability space.
  • Column Generation Algorithms for Nonlinear Optimization, I: Convergence Analysis
    Column generation algorithms for nonlinear optimization, I: Convergence analysis. Ricardo García,* Angel Marín,† and Michael Patriksson.‡ October 10, 2002.

    Abstract. Column generation is an increasingly popular basic tool for the solution of large-scale mathematical programming problems. As problems being solved grow bigger, column generation may however become less efficient in its present form, where columns typically are not optimizing, and finding an optimal solution instead entails finding an optimal convex combination of a huge number of them. We present a class of column generation algorithms in which the columns defining the restricted master problem may be chosen to be optimizing in the limit, thereby reducing the total number of columns needed. This first paper is devoted to the convergence properties of the algorithm class, and includes global (asymptotic) convergence results for differentiable minimization, finite convergence results with respect to the optimal face and the optimal solution, and extensions of these results to variational inequality problems. An illustration of its possibilities is made on a nonlinear network flow model, contrasting its convergence characteristics to that of the restricted simplicial decomposition (RSD) algorithm.

    Key words: Column generation, simplicial decomposition, inner approximation, simplices, column dropping, optimal face, non-degeneracy, pseudo-convex minimization, weak sharpness, finite convergence, pseudo-monotone+ mapping, variational inequalities.

    1 Introduction. 1.1 Origin and use. Column generation (CG for short) is a classic and increasingly popular means to attack large-scale problems in mathematical programming. A column generation algorithm can, roughly, be said to proceed according to the following two steps, used iteratively in succession: (1) A relaxation of the original problem is constructed, based on current estimates of the optimal solution.
  • Curriculum Vitae Martin Groetschel
    MARTIN GRÖTSCHEL (detailed information at http://www.zib.de/groetschel)

    Date / place of birth: Sept. 10, 1948, Schwelm, Germany
    Family: married since 1976, 3 daughters
    Affiliation: Full Professor at Technische Universität Berlin (TU), Institut für Mathematik, Str. des 17. Juni 136, D-10623 Berlin, Germany, Phone: +49 30 314 23266; President of Zuse Institute Berlin (ZIB), Takustr. 7, D-14195 Berlin, Germany, Phone: +49 30 84185 210. E-mail: [email protected]

    Selected (major) functions in scientific organizations and in scientific service:
    Current functions:
    since 1999: Member of the Executive Committee of the IMU
    since 2001: Member of the Executive Board of the Berlin-Brandenburg Academy of Science
    since 2007: Secretary of the International Mathematical Union (IMU)
    since 2011: Chair of the Einstein Foundation, Berlin
    since 2012: Coordinator of the Forschungscampus Modal
    Former functions:
    1984–1986: Dean of the Faculty of Science, U Augsburg
    1985–1988: Member of the Executive Committee of the MPS
    1989–1996: Member of the Executive Committee of the DMV, President 1992–1993
    2002–2008: Chair of the DFG Research Center MATHEON "Mathematics for key technologies"

    Scientific vita:
    1969–1973: Mathematics and economics studies (Diplom/master), U Bochum
    1977: PhD in economics, U Bonn
    1981: Habilitation in operations research, U Bonn
    1973–1982: Research Assistant, U Bonn
    1982–1991: Full Professor (C4), U Augsburg (Chair for Applied Mathematics)
    since 1991: Full Professor (C4), TU Berlin (Chair for Information Technology)
    1991–2012: Vice president of Zuse Institute Berlin (ZIB)
    since 2012: President of ZIB

    Selected awards:
    1982: Fulkerson Prize of the American Math. Society and Math. Programming Soc.