Combinatorial Optimization Problems: Heuristic Algorithms


Giovanni Righini
University of Milan, Department of Computer Science (Crema)

Optimization

In general, an optimization problem can be formulated as:

    minimize z(x)
    subject to x ∈ X

where
• x is a vector of variables;
• z(x) is the objective function;
• X is the feasible region, i.e. the set of solutions satisfying the constraints.

A solution is an assignment of values to the variables.

Combinatorial optimization

In general, a combinatorial optimization problem can be formulated as:

    minimize z(x)
    subject to x ∈ {0,1}^{|E|}

The feasible region X is defined as a subset of the set of all possible subsets of a given ground set (!). Let's say it again with an example...

Combinatorial optimization: an example

Ground set E: the set of edges of a given graph G. All possible subsets of E are the 2^{|E|} subsets of edges. Only a subset X of them are, for instance, spanning trees. The condition x ∈ X describes the feasible region of any problem involving the search for an optimal spanning tree.

Although the ground set E is rather small, the number of its subsets is exponential in its cardinality (2^{|E|}), and hence the number of solutions can be very large. Even restricting the search to feasible solutions, the cardinality of X can be combinatorial: it grows as a combinatorial number when |E| grows.

The combinatorial structure of a problem

The variables, the constraints and the objective function of a combinatorial optimization problem define its "combinatorial structure". This is a semi-informal way to indicate the main characteristics of the problem that affect the effectiveness of different solution procedures. The analysis of the combinatorial structure of a given problem gives useful indications on the most suitable algorithm to solve it. Moreover, it can uncover similarities between seemingly different problems.

Problems on weighted sets: the Knapsack Problem (KP)

From a ground set of items, select a subset such that the value of the selected items is maximum and their weight does not exceed a given capacity. We are given:
• a ground set N of items,
• a weight function a : N → ℕ,
• a capacity b ∈ ℕ,
• a value function c : N → ℕ.

We can associate a binary variable x_j with each element of the ground set: the solution space is {0,1}^n, where n = |N|. In this way every solution corresponds to a subset and to its binary characteristic vector x.

The feasible region X contains the subsets with total weight not larger than b:

    X = {x ∈ {0,1}^n : ∑_{j∈N} a_j x_j ≤ b}

The objective is the maximization of the total value:

    max z(x) = ∑_{j∈N} c_j x_j.

Example

    Item  A  B  C  D  E  F  G
    c     7  2  4  5  4  1  –
    a     5  3  2  3  1  1  –
    b = 8

    x′ = (0, 0, 1, 1, 1, 0, 0) ∈ X    z(x′) = 13
    x′′ = (1, 0, 1, 1, 0, 0, 0) ∉ X   z(x′′) = 16

The first solution selects items C, D, E, whose total weight 2 + 3 + 1 = 6 respects the capacity; the second selects A, C, D, whose total weight 5 + 2 + 3 = 10 exceeds it.
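To make the membership test and the objective evaluation concrete, here is a minimal Python sketch (an illustration, not part of the original slides) for the knapsack example above. The value and weight of item G are not legible in the source, so the zeros used for it are placeholder assumptions; they do not affect the two solutions shown, since neither selects G.

    # Characteristic-vector representation of knapsack solutions.
    # Item G's value and weight are not legible in the source, so the
    # zeros below are placeholder assumptions (neither solution shown
    # selects G, so they do not affect the results).
    c = [7, 2, 4, 5, 4, 1, 0]  # item values (last entry assumed)
    a = [5, 3, 2, 3, 1, 1, 0]  # item weights (last entry assumed)
    b = 8                      # knapsack capacity

    def z(x):
        # Additive objective: sum of the values of the selected items.
        return sum(cj * xj for cj, xj in zip(c, x))

    def feasible(x):
        # Membership test x in X: total weight within capacity.
        return sum(aj * xj for aj, xj in zip(a, x)) <= b

    x1 = (0, 0, 1, 1, 1, 0, 0)  # selects {C, D, E}
    x2 = (1, 0, 1, 1, 0, 0, 0)  # selects {A, C, D}
    print(feasible(x1), z(x1))  # True 13
    print(feasible(x2), z(x2))  # False 16

As discussed further below, this additive objective is also cheap to update when a single item enters or leaves the solution.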
Problems on subsets with a metric: the Max Diversity Problem (MDP)

Given a ground set of items, we want to select a subset of given cardinality, maximizing a measure of the pairwise distances between the selected items. We are given:
• a ground set N,
• a distance function d : N × N → ℕ,
• a positive integer k ∈ {1, ..., |N|}.

A possible choice of the variables is analogous to the previous case: x is the binary characteristic vector of the selected subset. The feasible region X contains all the subsets of cardinality k:

    X = {x ∈ {0,1}^n : ∑_{j∈N} x_j = k}.

The objective is to maximize the sum of the pairwise distances:

    max z(x) = ∑_{i,j∈N} d_{ij} x_i x_j.

It is a quadratic function.

Example

Seven points A, ..., G in the plane, with k = 3 (figure omitted):

    x′ = (0, 0, 1, 1, 1, 0, 0) ∈ X    z(x′) = 24    (selects C, D, E)
    x′′ = (1, 0, 1, 0, 0, 0, 1) ∈ X   z(x′′) = 46   (selects A, C, G)

Additive and non-additive objective functions

In general, the objective function associates rational or integer values with (feasible) subsets of the ground set:

    z : X → ℕ

Computing its value can be more or less difficult.
• The KP has an additive (linear) objective function: its value is the sum of the values of a value function c whose domain is the ground set, c : N → ℕ.
• The MDP has a non-additive (quadratic) objective function.

Both are easy to compute, but the additive objective function of the KP is easier to update when an item is inserted into or deleted from the solution. It is enough to
• add c_j for each inserted item j ∈ N;
• subtract c_j for each deleted item j ∈ N.

For the non-additive objective function of the MDP this is not true.

Set partitioning problems: the Bin Packing Problem (BPP)

A set of weighted items must be partitioned into the minimum number of subsets, so that the total weight of each subset does not exceed a given capacity. We are given:
• a set N of items,
• a set M of bins,
• a weight function a : N → ℕ,
• a capacity b of the bins.

The ground set of the problem contains all the item-bin pairs:

    E = N × M

A solution is represented by nm binary variables x_{ij} with two indices, i ∈ N and j ∈ M. The feasible region contains the partitions of the items that comply with the capacity constraints:

    X = {x ∈ {0,1}^{nm} : ∑_{j∈M} x_{ij} = 1 ∀i ∈ N, ∑_{i∈N} a_i x_{ij} ≤ b ∀j ∈ M}.

The objective is to minimize the number of bins used:

    min z(x) = |{j ∈ M : ∑_{i∈N} x_{ij} > 0}|.

To get rid of the "cardinality of" operator, we need m additional binary variables: the characteristic vector y of the subset of bins actually used. We obtain:

    X = {(x, y) ∈ {0,1}^{nm+m} : ∑_{j∈M} x_{ij} = 1 ∀i ∈ N, ∑_{i∈N} a_i x_{ij} ≤ b y_j ∀j ∈ M},

    min z(y) = ∑_{j∈M} y_j.

Example

    S′ = {(A,1), (B,1), (C,2), (D,2), (E,2), (F,3), (G,4), (H,5), (I,5)} ∈ X
    y′ = (1, 1, 1, 1, 1)    z(y′) = 5

    S′′ = {(A,1), (B,1), (C,2), (D,2), (E,2), (F,3), (G,4), (H,1), (I,4)} ∉ X
    y′′ = (1, 1, 1, 1, 0)   z(y′′) = 4

Set partitioning problems: the Parallel Machine Scheduling Problem (PMSP)

A set of indivisible jobs of given duration must be assigned to a set of machines, minimizing the overall completion time. We are given:
• a set N of jobs,
• a set M of machines,
• a processing time function p : N → ℕ.

The ground set contains all job-machine pairs. We can use the same choice of variables as for the BPP. The feasible region X contains the partitions of N into subsets:

    X = {x ∈ {0,1}^{nm} : ∑_{j∈M} x_{ij} = 1 ∀i ∈ N}.

The objective function is the minimization of the maximum working time among all machines (evaluated in the sketch below):

    min z(x) = max_{j∈M} ∑_{i∈N} p_i x_{ij}.
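The following minimal Python sketch (an illustration, not part of the original slides) computes this min-max objective, the makespan, for the two assignments of the example that follows.

    # Evaluating the PMSP objective (makespan) of a job-to-machine
    # assignment. Instance data taken from the example that follows.
    p = {"L1": 80, "L2": 40, "L3": 20, "L4": 30, "L5": 15, "L6": 80}

    def makespan(assignment):
        # z(x) = max over machines of the total assigned processing time.
        load = {}
        for job, machine in assignment.items():
            load[machine] = load.get(machine, 0) + p[job]
        return max(load.values())

    s1 = {"L1": "M1", "L5": "M1", "L2": "M2", "L3": "M2", "L4": "M2", "L6": "M3"}
    s2 = {"L1": "M1", "L2": "M1", "L3": "M2", "L4": "M2", "L5": "M2", "L6": "M3"}
    print(makespan(s1))  # 95  (loads: M1 = 95, M2 = 90, M3 = 80)
    print(makespan(s2))  # 120 (loads: M1 = 120, M2 = 65, M3 = 80)

Note that moving L5 from M1 to M3 in the first assignment leaves the makespan at 95, a small instance of the insensitivity discussed after the example.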
Example

    N = {L1, L2, L3, L4, L5, L6}    M = {M1, M2, M3}

    Job  L1  L2  L3  L4  L5  L6
    p    80  40  20  30  15  80

    S′ = {(L1,M1), (L5,M1), (L2,M2), (L3,M2), (L4,M2), (L6,M3)} ∈ X
    z(x′) = 95    (M1: 95, M2: 90, M3: 80)

    S′′ = {(L1,M1), (L2,M1), (L3,M2), (L4,M2), (L5,M2), (L6,M3)} ∈ X
    z(x′′) = 120  (M1: 120, M2: 65, M3: 80)

Sensitive and insensitive objective functions

The objective functions of the BPP and of the PMSP
• are not additive,
• are not easy to compute.

Small changes in the solution x may have a different impact on the objective function value:
• a variation equal to the duration of the moved job (e.g., L5 on M1);
• no variation (e.g., L5 on M3);
• an intermediate variation (e.g., L2 on M2).

This is because the effect of the change depends both
• on the modified elements, and
• on the non-modified elements.

In both problems the objective function is "flat": many different feasible solutions have the same value.

Problems on matrices: the Set Covering Problem (SCP)

Given a binary matrix and a vector of costs associated with its columns, select a minimum-cost subset of columns covering all the rows. We are given:
• a binary matrix a ∈ B^{m×n}, with a set R of m rows and a set C of n columns,
• a cost function c : C → ℕ.

A column j ∈ C covers a row i ∈ R if and only if a_{ij} = 1. The ground set is the set of columns C. The feasible region contains the subsets of columns that cover all the rows:

    X = {x ∈ {0,1}^n : ∑_{j∈C} a_{ij} x_j ≥ 1 ∀i ∈ R}.

The objective is to minimize the total cost of the selected columns:

    min z(x) = ∑_{j∈C} c_j x_j.

Example

    c = (4, 6, 10, 14, 5, 6)

        | 0 1 1 1 1 0 |
        | 0 0 1 1 0 0 |
    a = | 1 1 0 0 0 1 |
        | 0 0 0 1 1 1 |
        | 1 1 1 0 1 0 |

    x′ = (1, 0, 1, 0, 1, 0) ∈ X    z(x′) = 19
    x′′ = (1, 0, 0, 0, 1, 1) ∉ X   z(x′′) = 15

Columns 1, 3 and 5 cover every row; columns 1, 5 and 6 leave the second row uncovered, so x′′ is infeasible.

The feasibility test

In a heuristic algorithm the following sub-problem often occurs: given a solution x, is it feasible or not? That is, does x ∈ X hold? This is a decision problem.
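As a concrete instance, here is a minimal Python sketch (an illustration, not part of the original slides) of this membership test for the SCP example above.

    # Feasibility test "x in X?" for the SCP example above: every row
    # must be covered by at least one selected column.
    a = [
        [0, 1, 1, 1, 1, 0],
        [0, 0, 1, 1, 0, 0],
        [1, 1, 0, 0, 0, 1],
        [0, 0, 0, 1, 1, 1],
        [1, 1, 1, 0, 1, 0],
    ]
    c = [4, 6, 10, 14, 5, 6]

    def feasible(x):
        # x in X iff sum_j a[i][j] * x[j] >= 1 for every row i in R.
        return all(sum(aij * xj for aij, xj in zip(row, x)) >= 1 for row in a)

    def z(x):
        # Total cost of the selected columns.
        return sum(cj * xj for cj, xj in zip(c, x))

    x1 = (1, 0, 1, 0, 1, 0)
    x2 = (1, 0, 0, 0, 1, 1)
    print(feasible(x1), z(x1))  # True 19
    print(feasible(x2), z(x2))  # False 15 (row 2 uncovered)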