Chapter 5.2 and 5.3, Sara Gestrelius

Last time: Classification

Hybrid metaheuristics are classified along two axes, low-level vs. high-level and relay vs. teamwork, giving four classes: LRH (low-level relay hybrid), LTH (low-level teamwork hybrid), HRH (high-level relay hybrid) and HTH (high-level teamwork hybrid).

5.2 Combining metaheuristics with mathematical programming

Outline: mathematical programming (MP) vs. metaheuristics (MH), enumerative algorithms, relaxation and decomposition methods, and cutting plane and pricing algorithms, followed by the four hybrid classes: low-level relay, low-level teamwork, high-level relay and high-level teamwork.

Mathematical programming: why combine?

• Mathematical programming: exact, but expensive in memory and time.
• Metaheuristics: not exact, but manageable in memory and time.
The combination pays off mainly if your problem is discrete and/or has nonlinear hard constraints or a nonlinear objective function.

Enumerative algorithms: Branch and bound

1. Branch: Divide the problem, e.g. into x = 0 and x = 1 if x is binary.
2. Bound: Keep track of an upper bound (i.e. the best solution found so far) and a lower bound for every node.
3. Prune: If a node's lower bound > the upper bound, don't investigate that node further.

[Figure: a branch-and-bound search tree developed over four slides, with node bounds 50, 130, 134, 140 and 143.]

Enumerative algorithms: Dynamic programming

Bellman's Principle is a necessary condition: "The sub-policy of an optimal policy is in itself optimal with regard to the start and end states."

1. Divide the problem into N stages. For each stage there should be a number of states x.
2. Define the cost of the initial states in the initial stage.
3. Define a recursive relation for a state at stage k given the states of previous stages.

Example: the knapsack problem.
1. Stages = items; states = how much weight capacity is used at this and all preceding stages, i.e. by this and all preceding items (0, 1, 2, 3, 4, 5).
2. Define the cost of the initial states.
3. Recursive relation: see the sketch below.

Item  Weight (w)  Utility (u)
1     2           65
2     3           80
3     1           30
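The recursive relation on the original slide did not survive extraction, so the sketch below assumes the standard knapsack recursion f_k(c) = max(f_{k-1}(c), f_{k-1}(c - w_k) + u_k), where c is the remaining weight capacity; the capacity of 5 matches the states (0..5) listed above.

```python
# Minimal dynamic-programming sketch for the knapsack example above,
# assuming the standard Bellman recursion over stages (items) and
# states (weight capacity used so far).

def knapsack(weights, utilities, capacity):
    # f[c] = best utility achievable with capacity c, considering
    # the stages (items) processed so far.
    f = [0] * (capacity + 1)
    for w, u in zip(weights, utilities):      # one stage per item
        for c in range(capacity, w - 1, -1):  # visit states backwards
            f[c] = max(f[c], f[c - w] + u)    # skip item vs. take item
    return f[capacity]

print(knapsack([2, 3, 1], [65, 80, 30], 5))   # -> 145 (items 1 and 2)
```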
Relaxation and decomposition methods

• Linear programming relaxation: Ignore the integrality constraints of an IP. Handy for finding lower bounds.
• Lagrangian relaxation: Move a constraint into the objective function and weigh it with a multiplier λ.
• Benders decomposition: There is a lot to it, but basically: divide the problem into a master problem and a sub-problem, and use the sub-problem to generate new constraints for the master.

Cutting plane and pricing algorithms

• Cutting planes: Add constraints to the relaxed IP to generate tighter IP relaxations. See http://mat.gsia.cmu.edu/orclass/integer/node14.html
• Column generation (pricing): A master problem (without all variables) and a sub-problem that generates variables likely to be useful to the master. The master passes duals to the sub-problem; the sub-problem returns new columns.

Low-level relay hybrid

Two directions: S-metaheuristics in exact methods, and exact methods in S-metaheuristics.

S-metaheuristics in exact methods:
• Branch and bound:
  1. Node selection strategy: Which node should we branch on next, and how? Use problem knowledge and S-metaheuristics.
  2. Upper bound generation: Use S-metaheuristics to build complete solutions from the partial solutions at the nodes.
• Cutting plane and pricing algorithms:
  1. Cut generation: Extract a set of violated capacity constraints of the relaxed problem.
  2. Column generation: Generate new variables (columns).

Exact methods in S-metaheuristics:
1. Very large neighbourhoods: Use mathematical programming (B&B, DP, network flow algorithms, matching algorithms) to search the whole neighbourhood, or a subset of it, for the best or an improving solution.
2. Lower bounds: Use them in the S-metaheuristic. Other information may also be used, e.g. Lagrangian multipliers.

Low-level teamwork hybrid

Two directions: P-metaheuristics in exact methods, and exact methods in P-metaheuristics.

P-metaheuristics in exact methods:
• Branching: Which node to branch on? Which values to use in branching? Which node to solve next? Genetic algorithms have been used for solving the node selection problem.
• Local branching (sort of like local search in MIP), applicable to binary variables: let s be a partial solution over the binaries in the problem. Let the optimization algorithm solve the problem restricted to the k-opt neighbourhood of s, and if no better solution is found, solve the other branch (change more than k variables).
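Local branching amounts to one extra linear constraint bounding the Hamming distance to the incumbent. The following is a minimal sketch using the open-source PuLP modeller; the toy objective, constraint, incumbent and choice of k are assumptions for illustration, not from the slides.

```python
# Hedged sketch of the local-branching constraint in PuLP; the model
# below is a made-up toy MIP, not the slides' example.
import pulp

def local_branching(x, s, k):
    """k-opt neighbourhood of incumbent s: at most k binaries may flip."""
    return (pulp.lpSum(1 - x[j] for j in x if s[j] == 1) +
            pulp.lpSum(x[j] for j in x if s[j] == 0)) <= k

prob = pulp.LpProblem("local_branching_demo", pulp.LpMaximize)
x = {j: pulp.LpVariable(f"x{j}", cat="Binary") for j in range(5)}
prob += pulp.lpSum((j + 1) * x[j] for j in range(5))  # toy objective
prob += pulp.lpSum(x[j] for j in range(5)) <= 3       # toy constraint
s = {0: 1, 1: 1, 2: 0, 3: 0, 4: 0}                    # incumbent solution
prob += local_branching(x, s, k=2)  # left branch: stay within 2 flips
prob.solve()                        # right branch would use >= k + 1
```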
Exact methods in P-metaheuristics:
• Recombination and mutation: Use an exact algorithm for combining parents into new children, or as a mutation operator.
• Exact decoding: Use exact algorithms for generating a complete solution from the incomplete encoding used by the heuristic.
• Exact search ingredients:
  - Lower bounds: Gilmore-Lawler lower bounds and dual variable values have been used in the construction phase of an ant colony algorithm for the quadratic assignment problem. Lower bounds have also been used in evolutionary algorithms for mutation/crossover, where solutions that exceed a given bound are deleted.
  - Partial solutions: The partial solutions of branch and bound may be interesting initial solutions.

High-level relay hybrid

Information provided by metaheuristics:
1. Good upper bound: Solve the problem with a heuristic and start the exact method from that solution. This helps you prune more.
2. Fixing variables: Partition the variables into sets X and Y, let the metaheuristic fix the variables in X, and solve for the variables in Y using exact methods.
3. Domain reduction: Heuristically perform a domain reduction for the decision variables and solve exactly within the reduced domain.

Information provided by exact algorithms:
1. Partial solutions: Complete them with a metaheuristic.
2. Problem reduction: E.g. tree search has been used to reduce the size of a scheduling problem, after which tabu search with a simplified objective function found a schedule.
3. Relaxed optimal solutions and their duals: May be exploited by metaheuristics.

High-level teamwork hybrid

Parallel cooperation: e.g. combining branch and bound with simulated annealing. The SA algorithm sends improved upper bounds to B&B, and any integer solution found by B&B is used as an alternative reheated starting solution for SA. [Figure: B&B and an S-metaheuristic exchange initial solutions, non-explored nodes, partial solutions, upper bounds and best solutions found.]

Specialist cooperation: e.g. local search used to generate new columns. [Figure: branch-and-X, an LP solver, an S-metaheuristic and a P-metaheuristic exchange sub-problems, upper bounds, lower bounds, optimal solutions for sub-problems and relaxed problems, duals, partial solutions and sub-populations of solutions via a communication medium.]

5.3 Combining metaheuristics with constraint programming

Outline: constraint programming (CP) vs. metaheuristics, propagation and search, and the four hybrid classes.

Constraint programming vs. metaheuristics:
• Constraint programming: good at solving combinatorial optimization problems that are tightly constrained.
• Metaheuristics: good at solving problems that are underconstrained.

Search and propagate:
A constraint program consists of a set of variables linked by a set of constraints, e.g. the all_different constraint. The problem is solved by iterating between:
1. Propagate: Remove all values from the variable domains that cannot be part of a feasible solution. There is lots of existing code for this.
2. Search: Tree search; partition the problem into sub-problems by adding new constraints. You will probably need to write your own code for this, and decide on branch ordering and variable selection.
A minimal sketch of this propagate-and-search loop is given at the end of the section.

Hybrids: low-level relay, low-level teamwork, high-level relay and high-level teamwork.

Low-level relay hybrid:
• Neighbourhoods with expensive feasibility testing: CP is very good at feasibility!
• Large neighbourhoods: Once again, CP is good at feasibility.

Low-level teamwork hybrids. Embedding metaheuristics into constraint programming:
• Node improvement: Use metaheuristics to improve or repair the nodes of the search tree.
• Discrepancy-based search algorithms: Generate near-greedy paths in a search tree.
• Branch ordering: Use a metaheuristic to decide which child node to investigate first.
• Variable selection: Choose which branches to make.
• Branching restriction: Filter the branches of the search tree.

Embedding constraint programming into metaheuristics:
• Recombination, large neighbourhoods, lower bounds, partial solutions, decoders...

High-level relay hybrids:
• Information provided by the metaheuristic: same as when combining with MP.
• Information provided by CP: same as when combining with MP.

High-level teamwork hybrids: not that many exist. [Figure: CP, a metaheuristic and MP exchange best neighbours, optimal solutions for sub-problems, domain reductions, sub-problems, upper bounds, constraints, optimal solutions for relaxed problems and their duals, partial solutions and lower bounds via a communication medium.]

www.sics.se
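As referenced in the search-and-propagate slide above, here is a minimal sketch of the propagate-and-search loop for a toy model with finite domains and a single all_different constraint. It only illustrates the iteration; the variable names and domains are made up, and real CP systems propagate far more cleverly.

```python
# Toy propagate-and-search loop: finite-domain variables under one
# all_different constraint. Illustrative only; not a real CP solver.

def propagate(domains):
    """Remove values that cannot be part of a feasible solution: delete
    the value of every fixed variable from all other domains, to a
    fixed point. Returns False if some domain becomes empty."""
    changed = True
    while changed:
        changed = False
        for v in domains:
            if len(domains[v]) == 1:
                val = next(iter(domains[v]))
                for w in domains:
                    if w != v and val in domains[w]:
                        domains[w] = domains[w] - {val}
                        changed = True
    return all(domains.values())

def search(domains):
    """Tree search: partition the problem by adding a constraint v = val."""
    if not propagate(domains):
        return None                      # infeasible sub-problem
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}
    # Variable selection: smallest unfixed domain first.
    v = min((v for v, d in domains.items() if len(d) > 1),
            key=lambda v: len(domains[v]))
    # Branch ordering: try values in increasing order.
    for val in sorted(domains[v]):
        child = {w: set(d) for w, d in domains.items()}
        child[v] = {val}
        solution = search(child)
        if solution is not None:
            return solution
    return None

print(search({"x": {1, 2}, "y": {1, 2}, "z": {1, 2, 3}}))
# -> {'x': 1, 'y': 2, 'z': 3}
```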