A Branch-and-Price Algorithm for the Generalized Assignment Problem

Martin Savelsbergh
Georgia Institute of Technology
School of Industrial and Systems Engineering
Atlanta, GA, USA

Abstract. The generalized assignment problem examines the maximum profit assignment of jobs to agents such that each job is assigned to precisely one agent, subject to capacity restrictions on the agents. A new algorithm for the generalized assignment problem is presented that employs both column generation and branch-and-bound to obtain optimal integer solutions to a set partitioning formulation of the problem.

July; revised November; revised October.

1. Introduction

The Generalized Assignment Problem (GAP) examines the maximum profit assignment of n jobs to m agents such that each job is assigned to precisely one agent, subject to capacity restrictions on the agents. Although interesting and useful in its own right, its main importance stems from the fact that it appears as a substructure in many models developed to solve real-world problems in areas such as vehicle routing, plant location, resource scheduling, and flexible manufacturing systems.

The GAP is easily shown to be NP-hard, and a considerable body of literature exists on the search for effective enumeration algorithms to solve problems of a reasonable size to optimality: Ross and Soland, Martello and Toth, Fisher, Jaikumar and Van Wassenhove, Guignard and Rosenwein, and Karabakal, Bean and Lohmann. A recent survey by Cattrysse and Van Wassenhove provides a comprehensive treatment of most of these methods.

In this paper we present an algorithm for the GAP that employs both column generation and branch-and-bound to obtain optimal integer solutions to a set partitioning formulation of the problem. We discuss various branching strategies that allow column generation at any node in the branch-and-bound tree. The algorithm can therefore be viewed as a branch-and-price algorithm, similar in spirit to the branch-and-cut algorithms that allow row generation at any node of the branch-and-bound tree.

Many variations of the basic algorithm have been implemented using MINTO, a Mixed INTeger Optimizer (Nemhauser, Savelsbergh, and Sigismondi). MINTO is a software system that solves mixed-integer linear programs by a branch-and-bound algorithm with linear programming relaxations. It also provides automatic constraint classification, preprocessing, primal heuristics, and constraint generation. Moreover, the user can enrich the basic algorithm by providing a variety of specialized application routines that customize MINTO to achieve maximum efficiency for a problem class.

This paper is organized as follows. Section 2 introduces both the standard and the set-partitioning-based formulation of the GAP. Section 3 presents the basic branch-and-price algorithm and discusses issues related to column generation and branch-and-bound. Section 4 examines the various branching strategies. Section 5 covers various implementation issues, and Section 6 describes the computational experiments that have been conducted. Finally, Section 7 examines approximation algorithms derived from the branch-and-price algorithm.

2. Formulations

In the GAP the objective is to find a maximum profit assignment of n jobs to m agents such that each job is assigned to precisely one agent, subject to capacity restrictions on the agents. The standard integer programming formulation is the following:

$$\max \sum_{1 \le i \le m} \sum_{1 \le j \le n} p_{ij} x_{ij}$$

subject to

$$\sum_{1 \le i \le m} x_{ij} = 1, \qquad j \in \{1, \dots, n\},$$

$$\sum_{1 \le j \le n} w_{ij} x_{ij} \le c_i, \qquad i \in \{1, \dots, m\},$$

$$x_{ij} \in \{0, 1\}, \qquad i \in \{1, \dots, m\},\ j \in \{1, \dots, n\},$$

where $p_{ij} \in \mathbb{Z}_+$ is the profit associated with assigning job j to agent i, $w_{ij} \in \mathbb{Z}_+$ the claim on the capacity of agent i by job j if it is assigned to agent i, $c_i \in \mathbb{Z}_+$ the capacity of agent i, and $x_{ij}$ a variable indicating whether job j is assigned to agent i ($x_{ij} = 1$) or not ($x_{ij} = 0$).
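As a concrete illustration of the standard formulation, the following minimal sketch builds and solves it with the open-source PuLP modeler. The tiny two-agent, four-job instance and the bundled CBC solver are our own assumptions for demonstration, not data or software from the paper.

```python
import pulp

# Illustrative data (our assumption): 2 agents, 4 jobs.
m, n = 2, 4
p = [[6, 4, 5, 3],   # p[i][j]: profit of assigning job j to agent i
     [5, 6, 3, 4]]
w = [[2, 3, 4, 2],   # w[i][j]: capacity claimed on agent i by job j
     [3, 2, 3, 3]]
c = [6, 7]           # c[i]: capacity of agent i

gap = pulp.LpProblem("GAP_standard", pulp.LpMaximize)
x = pulp.LpVariable.dicts(
    "x", [(i, j) for i in range(m) for j in range(n)], cat="Binary")

# Maximize total assignment profit.
gap += pulp.lpSum(p[i][j] * x[i, j] for i in range(m) for j in range(n))
# Each job is assigned to precisely one agent.
for j in range(n):
    gap += pulp.lpSum(x[i, j] for i in range(m)) == 1
# Capacity restriction on each agent.
for i in range(m):
    gap += pulp.lpSum(w[i][j] * x[i, j] for j in range(n)) <= c[i]

gap.solve(pulp.PULP_CBC_CMD(msg=False))
print("profit:", pulp.value(gap.objective))
print("assignment:", [(i, j) for (i, j) in x if x[i, j].value() == 1])
```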
The formulation underlying the branch-and-price algorithm discussed in this paper has an exponential number of variables and can be viewed as a disaggregated version of the above formulation.

Let $K_i = \{x^i_1, x^i_2, \dots, x^i_{k_i}\}$ be the set of all possible feasible assignments of jobs to agent i, i.e., each $x^i_k = (x^i_{1k}, x^i_{2k}, \dots, x^i_{nk})$ is a feasible solution to

$$\sum_{1 \le j \le n} w_{ij} x^i_{jk} \le c_i,$$

$$x^i_{jk} \in \{0, 1\}, \qquad j \in \{1, \dots, n\}.$$

Let $y^i_k$, for $i \in \{1, \dots, m\}$ and $k \in K_i$, be a binary variable indicating whether a feasible assignment $x^i_k$ is selected for agent i ($y^i_k = 1$) or not ($y^i_k = 0$). The GAP can now be formulated as

$$\max \sum_{1 \le i \le m} \sum_{1 \le k \le k_i} \Big( \sum_{1 \le j \le n} p_{ij} x^i_{jk} \Big) y^i_k$$

subject to

$$\sum_{1 \le i \le m} \sum_{1 \le k \le k_i} x^i_{jk} y^i_k = 1, \qquad j \in \{1, \dots, n\},$$

$$\sum_{1 \le k \le k_i} y^i_k \le 1, \qquad i \in \{1, \dots, m\},$$

$$y^i_k \in \{0, 1\}, \qquad i \in \{1, \dots, m\},\ k \in K_i,$$

where the first set of constraints enforces that each job is assigned to precisely one agent and the second set of constraints enforces that at most one feasible assignment is selected for each agent. This set partitioning formulation has been used by Cattrysse, Salomon, and Van Wassenhove to develop an approximation algorithm for the GAP.

The knapsack problem associated with agent i in the standard formulation, i.e.,

$$\max \sum_{1 \le j \le n} p_{ij} x_{ij}$$

subject to

$$\sum_{1 \le j \le n} w_{ij} x_{ij} \le c_i,$$

$$x_{ij} \in \{0, 1\}, \qquad j \in \{1, \dots, n\},$$

has been replaced in the disaggregated formulation by

$$\max \sum_{1 \le k \le k_i} \Big( \sum_{1 \le j \le n} p_{ij} x^i_{jk} \Big) y^i_k$$

subject to

$$\sum_{1 \le k \le k_i} y^i_k \le 1,$$

where $x^i_1, \dots, x^i_{k_i}$ are the integral solutions to the knapsack problem. Because the linear programming relaxation of the latter contains the convex hull of the integer solutions of the knapsack problem, the LP relaxation of the disaggregated formulation provides a bound that is at least as tight as the bound provided by the LP relaxation of the standard formulation.

Observe that the disaggregated formulation is essentially obtained by applying Dantzig-Wolfe decomposition to the standard formulation, where the knapsack constraints have been placed in the subproblem. Consequently, the value of the bound provided by the LP relaxation of the disaggregated formulation is equal to the value of the Lagrangean dual obtained by dualizing the semi-assignment constraints, i.e.,

$$\min_{\lambda} \max \; \sum_{1 \le i \le m} \sum_{1 \le j \le n} p_{ij} x_{ij} + \sum_{1 \le j \le n} \lambda_j \Big( 1 - \sum_{1 \le i \le m} x_{ij} \Big)$$

subject to

$$\sum_{1 \le j \le n} w_{ij} x_{ij} \le c_i, \qquad i \in \{1, \dots, m\},$$

$$x_{ij} \in \{0, 1\}, \qquad i \in \{1, \dots, m\},\ j \in \{1, \dots, n\}.$$

See, for example, Nemhauser and Wolsey, Section II, for an exposition of the relation between Lagrangean relaxation and Dantzig-Wolfe decomposition. The algorithms of Fisher, Jaikumar and Van Wassenhove, Guignard and Rosenwein, and Karabakal, Bean and Lohmann are based on bounds obtained by solving the above Lagrangean dual. Our computational experiments will show that the branch-and-price algorithm discussed in this paper, although in theory using the same bounds, outperforms these optimization algorithms. A plausible explanation for this phenomenon is that the simplex method provides much better convergence properties than the subgradient and dual ascent methods used to solve the Lagrangean dual.
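For fixed multipliers $\lambda$, the inner maximization in the Lagrangean dual above separates into one 0-1 knapsack per agent. The sketch below evaluates that bound with a textbook dynamic program; the function names are ours, and integer weights are assumed. It illustrates the decomposition only and is not the paper's implementation.

```python
def knapsack_01(values, weights, capacity):
    """Max-value 0-1 knapsack via dynamic programming (integer weights assumed).
    Returns the optimal value and the set of chosen item indices."""
    best = [0.0] * (capacity + 1)
    keep = [[False] * (capacity + 1) for _ in values]
    for j, (v, wt) in enumerate(zip(values, weights)):
        for r in range(capacity, wt - 1, -1):  # descending: each item used at most once
            if best[r - wt] + v > best[r]:
                best[r] = best[r - wt] + v
                keep[j][r] = True
    chosen, r = set(), capacity            # backtrack through the decision table
    for j in range(len(values) - 1, -1, -1):
        if keep[j][r]:
            chosen.add(j)
            r -= weights[j]
    return best[capacity], chosen

def lagrangean_bound(p, w, c, lam):
    """sum_j lam_j + sum_i max { sum_j (p_ij - lam_j) x_ij : knapsack of agent i }."""
    total = sum(lam)
    for i in range(len(c)):
        # Items with nonpositive reduced profit are never packed in an optimal solution.
        idx = [j for j in range(len(lam)) if p[i][j] - lam[j] > 0]
        z, _ = knapsack_01([p[i][j] - lam[j] for j in idx],
                           [w[i][j] for j in idx], c[i])
        total += z
    return total
```

With $\lambda = 0$ this reduces to the sum of the agents' individual knapsack profits; adjusting $\lambda$ tightens the bound, which is exactly what the subgradient and dual ascent methods cited above iterate on.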
Let y be any feasible solution to the LP relaxation of the disaggregated formulation, and let $z_{ij} = \sum_{1 \le k \le k_i} x^i_{jk} y^i_k$; then z constitutes a feasible solution to the LP relaxation of the standard formulation. Furthermore, we have the following.

Proposition 1. If y is fractional, then there is a j such that $z_{ij}$ is fractional.

Proof. Suppose there is no job j such that $z_{ij}$ is fractional. Let $F = \{k \in K_i \mid 0 < y^i_k < 1\}$ be the set of fractional variables associated with agent i. We may assume that $|F| \ge 2$, because if $F = \{p\}$, then $z_{ij} = \sum_k x^i_{jk} y^i_k = x^i_{jp} y^i_p$ is fractional for every j with $x^i_{jp} = 1$. Note that the convexity constraint associated with agent i implies that $\sum_{k \in F} y^i_k \le 1$. Therefore $z_{ij} = \sum_{k \in F} x^i_{jk} y^i_k$ is either $0$ or $\sum_{k \in F} y^i_k$ for $j = 1, \dots, n$. Consequently, if $\sum_{k \in F} x^i_{jk} y^i_k = 0$, then $x^i_{jk} = 0$ for all $k \in F$; if $\sum_{k \in F} x^i_{jk} y^i_k = \sum_{k \in F} y^i_k$, then $x^i_{jk} = 1$ for all $k \in F$. But that means that we have duplicate columns, a contradiction.

3. Branch-and-price algorithms

Column generation is a pricing scheme for solving large-scale linear programs (LPs). Instead of pricing out nonbasic variables by enumeration, in a column generation approach the most negative (or, in a maximization problem, the most positive) reduced price is found by solving an optimization problem. Gilmore and Gomory introduced the column generation approach in the context of cutting stock problems. In their case, as in many other cases, the linear program is a relaxation of an integer program (IP). However, when an LP relaxation is solved by column generation, the solution is not necessarily integral, and it is not clear how to obtain an optimal, or even a feasible, integer solution to the IP, since standard branch-and-bound techniques can interfere with the column generation algorithm. Recently, various researchers have started to develop customized branching strategies to handle these difficulties, e.g., Desrochers, Desrosiers, and Solomon for vehicle routing problems, Desrochers and Soumis and Anbil, Tanga, and Johnson for crew scheduling problems, and Vance, Barnhart, Johnson, and Nemhauser for cutting stock problems.

Consider the linear programming relaxation of the disaggregated formulation for the GAP. This master problem cannot be solved directly due to the exponential number of columns. However, a restricted master problem that considers only a subset of the columns can be solved directly, using, for instance, the simplex method. Additional columns for the restricted master problem can be generated as needed by solving the pricing problem

$$\max_{1 \le i \le m} \{ z(KP_i) - v_i \},$$

where $v_i$ is the optimal dual price, in the solution to the restricted master problem, associated with the convexity constraint of agent i, and $z(KP_i)$ is the value of the optimal solution to the following knapsack problem:

$$\max \sum_{1 \le j \le n} (p_{ij} - u_j) x^i_j$$

subject to

$$\sum_{1 \le j \le n} w_{ij} x^i_j \le c_i,$$

$$x^i_j \in \{0, 1\}, \qquad j \in \{1, \dots, n\},$$

with $u_j$ being the optimal dual prices associated with the semi-assignment constraints.
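Putting the pieces together, root-node column generation alternates between the restricted master LP and one knapsack pricing problem per agent. The sketch below reuses knapsack_01 from the previous block and again uses PuLP/CBC; the big-M artificial variables for initial feasibility and all names are our own assumptions, and branching (the paper's Section 4) is omitted.

```python
import pulp

BIG_M = 10 ** 6  # penalty on artificial variables (assumption for initial feasibility)

def solve_restricted_master(p, columns, n, m):
    """Solve the restricted master LP; return its value, duals u (jobs), v (agents)."""
    rmp = pulp.LpProblem("GAP_restricted_master", pulp.LpMaximize)
    y = {(i, k): pulp.LpVariable(f"y_{i}_{k}", lowBound=0)
         for i in range(m) for k in range(len(columns[i]))}
    s = [pulp.LpVariable(f"s_{j}", lowBound=0) for j in range(n)]  # artificials
    rmp += (pulp.lpSum(sum(p[i][j] for j in columns[i][k]) * y[i, k] for (i, k) in y)
            - BIG_M * pulp.lpSum(s))
    for j in range(n):  # each job covered exactly once; artificial keeps RMP feasible
        rmp += (pulp.lpSum(y[i, k] for (i, k) in y if j in columns[i][k]) + s[j] == 1,
                f"assign_{j}")
    for i in range(m):  # convexity: at most one assignment per agent
        rmp += pulp.lpSum(y[i, k] for k in range(len(columns[i]))) <= 1, f"convex_{i}"
    rmp.solve(pulp.PULP_CBC_CMD(msg=False))
    u = [rmp.constraints[f"assign_{j}"].pi for j in range(n)]
    v = [rmp.constraints[f"convex_{i}"].pi for i in range(m)]
    return pulp.value(rmp.objective), u, v

def root_column_generation(p, w, c):
    """Generate columns until no agent's knapsack prices out; return bound, columns."""
    m, n = len(c), len(p[0])
    columns = [[frozenset()] for _ in range(m)]  # start from empty assignments
    while True:
        bound, u, v = solve_restricted_master(p, columns, n, m)
        improving = False
        for i in range(m):
            # Pricing: knapsack over reduced profits p_ij - u_j (knapsack_01 above).
            idx = [j for j in range(n) if p[i][j] - u[j] > 0]
            z, chosen = knapsack_01([p[i][j] - u[j] for j in idx],
                                    [w[i][j] for j in idx], c[i])
            if z - v[i] > 1e-6:  # positive reduced price: add the column
                columns[i].append(frozenset(idx[t] for t in chosen))
                improving = True
        if not improving:
            return bound, columns
```

When no agent's knapsack prices out, i.e., $\max_i \{ z(KP_i) - v_i \} \le 0$, the restricted master solution is optimal for the full master LP, giving the bound used at the root of the branch-and-price tree.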