Integer Programming (Part 1)

Lecturer: Javier Peña
Convex Optimization 10-725/36-725

Coordinate descent

Assume $f(x) = g(x) + \sum_{i=1}^n h_i(x_i)$, with $g$ convex and differentiable and each $h_i$ convex. To minimize $f$: start with some initial guess $x^{(0)}$ and repeat, for $k = 1, 2, 3, \ldots$:

$$x_1^{(k)} \in \operatorname*{argmin}_{x_1} \; f\big(x_1, x_2^{(k-1)}, x_3^{(k-1)}, \ldots, x_n^{(k-1)}\big)$$
$$x_2^{(k)} \in \operatorname*{argmin}_{x_2} \; f\big(x_1^{(k)}, x_2, x_3^{(k-1)}, \ldots, x_n^{(k-1)}\big)$$
$$x_3^{(k)} \in \operatorname*{argmin}_{x_3} \; f\big(x_1^{(k)}, x_2^{(k)}, x_3, \ldots, x_n^{(k-1)}\big)$$
$$\vdots$$
$$x_n^{(k)} \in \operatorname*{argmin}_{x_n} \; f\big(x_1^{(k)}, x_2^{(k)}, x_3^{(k)}, \ldots, x_n\big)$$

Outline

Today:
• (Mixed) integer programming
• Examples
• Solution techniques:
  - relaxation
  - branch and bound
  - cutting planes

(Mixed) integer program

An optimization model where some variables are restricted to be integer:

$$\min_x \; f(x) \quad \text{subject to} \quad x \in C, \;\; x_j \in \mathbb{Z}, \; j \in J,$$

where $f : \mathbb{R}^n \to \mathbb{R}$, $C \subseteq \mathbb{R}^n$, and $J \subseteq \{1, \ldots, n\}$. When $J = \{1, \ldots, n\}$, the above problem is a pure integer program. Throughout our discussion, assume $f$ and $C$ are convex.

Special case: binary variables

In some cases the variables of an integer program represent yes/no decisions or logical variables. These kinds of decisions can be encoded via binary variables that take the values 0 or 1.

Combinatorial optimization

A combinatorial optimization problem is a triple $(N, \mathcal{F}, c)$ where
• $N$ is a finite ground set,
• $\mathcal{F} \subseteq 2^N$ is a set of feasible solutions, and
• $c \in \mathbb{R}^N$ is a cost function.

The goal is to solve
$$\min_{S \in \mathcal{F}} \; \sum_{i \in S} c_i.$$
Many combinatorial optimization problems can be written as binary integer programs.

Knapsack problem

Determine the most valuable items to take in a knapsack of limited volume:

$$\max_x \; c^T x \quad \text{subject to} \quad a^T x \le b, \;\; x_j \in \{0, 1\}, \; j = 1, \ldots, n.$$

Here $c_j$ and $a_j$ are the value and volume of item $j$, and $b$ is the volume of the knapsack.
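To make the formulation concrete, here is a minimal sketch of this knapsack model in Python using the PuLP modeling library. The four-item instance is made up for illustration, and any MIP modeler and solver would work equally well.

```python
# A sketch of the knapsack integer program using PuLP (pip install pulp).
# The item data below is hypothetical, chosen only to exercise the model.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

values = [10, 13, 7, 8]   # c_j: value of item j
volumes = [3, 4, 2, 3]    # a_j: volume of item j
b = 7                     # knapsack volume

prob = LpProblem("knapsack", LpMaximize)
x = [LpVariable(f"x{j}", cat="Binary") for j in range(len(values))]  # x_j in {0,1}
prob += lpSum(c * xj for c, xj in zip(values, x))         # objective: c^T x
prob += lpSum(a * xj for a, xj in zip(volumes, x)) <= b   # constraint: a^T x <= b

prob.solve()  # default CBC solver bundled with PuLP
print("take items:", [j for j, xj in enumerate(x) if value(xj) > 0.5])
print("total value:", value(prob.objective))
```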
Assignment problem

There are $n$ people available to carry out $n$ jobs. Each person is assigned to exactly one job, and there is a cost $c_{ij}$ if person $i$ is assigned to job $j$. Find a minimum-cost assignment:

$$\min_x \; \sum_{i=1}^n \sum_{j=1}^n c_{ij} x_{ij}$$
$$\sum_{i=1}^n x_{ij} = 1, \quad j = 1, \ldots, n$$
$$\sum_{j=1}^n x_{ij} = 1, \quad i = 1, \ldots, n$$
$$x_{ij} \in \{0, 1\}, \quad i = 1, \ldots, n, \; j = 1, \ldots, n.$$

Facility location

There are $N = \{1, \ldots, n\}$ depots and $M = \{1, \ldots, m\}$ clients, a fixed cost $f_j$ associated with the use of depot $j$, and a transportation cost $c_{ij}$ if client $i$ is served from depot $j$. Determine which depots to open and which clients each depot serves, so as to minimize the sum of fixed and transportation costs:

$$\min_{x,y} \; \sum_{j=1}^n f_j y_j + \sum_{i=1}^m \sum_{j=1}^n c_{ij} x_{ij}$$
$$\sum_{j=1}^n x_{ij} = 1, \quad i = 1, \ldots, m$$
$$x_{ij} \le y_j, \quad i = 1, \ldots, m, \; j = 1, \ldots, n$$
$$x_{ij} \in \{0, 1\}, \quad y_j \in \{0, 1\}, \quad i = 1, \ldots, m, \; j = 1, \ldots, n.$$

Facility location (alternative formulation)

Since all variables are binary, the $mn$ constraints $x_{ij} \le y_j$, $i = 1, \ldots, m$, $j = 1, \ldots, n$, can be replaced by the $n$ aggregated constraints
$$\sum_{i=1}^m x_{ij} \le m\, y_j, \quad j = 1, \ldots, n.$$
The rest of the model (objective, assignment constraints, binary restrictions) is unchanged, giving an alternative formulation with far fewer constraints.

K-means and K-medoids clustering

Assume $x^{(1)}, \ldots, x^{(n)} \in \mathbb{R}^d$.

K-means: find a partition $S_1 \cup \cdots \cup S_K = \{1, \ldots, n\}$ that minimizes
$$\sum_{i=1}^K \sum_{j \in S_i} \|x^{(j)} - \mu^{(i)}\|^2,$$
where $\mu^{(i)} := \frac{1}{|S_i|} \sum_{j \in S_i} x^{(j)}$ is the centroid of cluster $i$.

K-medoids: find a partition $S_1 \cup \cdots \cup S_K = \{1, \ldots, n\}$ and select $y^{(i)} \in \{x^{(j)} : j \in S_i\}$, $i = 1, \ldots, K$, to minimize
$$\sum_{i=1}^K \sum_{j \in S_i} \|x^{(j)} - y^{(i)}\|^2.$$

Best subset selection

Assume $X = [x^1 \, \cdots \, x^p] \in \mathbb{R}^{n \times p}$ and $y \in \mathbb{R}^n$. The best subset selection problem is
$$\min_\beta \; \tfrac{1}{2}\|y - X\beta\|_2^2 \quad \text{subject to} \quad \|\beta\|_0 \le k,$$
where $\|\beta\|_0 :=$ number of nonzero entries of $\beta$. Can you give an integer programming formulation of this problem?

Least median of squares regression

Assume $X = [x^1 \, \cdots \, x^p] \in \mathbb{R}^{n \times p}$ and $y \in \mathbb{R}^n$. Given $\beta \in \mathbb{R}^p$, let $r := y - X\beta$ denote the residual vector. Observe:
• Least squares (LS): $\beta_{\mathrm{LS}} := \operatorname*{argmin}_\beta \sum_i r_i^2$
• Least absolute deviation (LAD): $\beta_{\mathrm{LAD}} := \operatorname*{argmin}_\beta \sum_i |r_i|$
• Least median of squares (LMS): $\beta_{\mathrm{LMS}} := \operatorname*{argmin}_\beta \; (\operatorname{median} |r_i|)$

Can you give an integer programming formulation for LMS?

How hard is integer programming?

• Solving integer programs is much more difficult than solving convex optimization problems.
• Integer programming is NP-hard: there are no known polynomial-time algorithms for solving integer programs.
• Solving the associated convex relaxation (ignoring the integrality constraints) yields a lower bound on the optimal value.
• The convex relaxation may convey only limited information:
  - rounding to a feasible integer solution may be difficult;
  - the optimal solution of the relaxation can be arbitrarily far from the optimal solution of the integer program;
  - rounding may produce a solution far from optimal.

Techniques for solving integer programs

Consider an integer program
$$z := \min_{x \in X} f(x)$$
(assume $X$ includes both the convex and the integrality constraints). Unlike convex optimization, there are no straightforward "optimality conditions" with which to verify that a feasible point $x^\star \in X$ is optimal. A naive alternative: find a lower bound $\underline{z} \le z$ and an upper bound $\bar{z} \ge z$ with $\underline{z} = \bar{z}$.

Algorithmic template: find a decreasing sequence of upper bounds
$$\bar{z}_1 \ge \bar{z}_2 \ge \cdots \ge \bar{z}_s \ge z$$
and an increasing sequence of lower bounds
$$\underline{z}_1 \le \underline{z}_2 \le \cdots \le \underline{z}_t \le z,$$
and stop when $\bar{z}_s - \underline{z}_t \le \epsilon$ for some specified tolerance $\epsilon > 0$.

Primal and dual bounds

How can we find upper and lower bounds for the problem $z := \min_{x \in X} f(x)$?

Primal bounds: any feasible $x \in X$ yields an upper bound $f(x) \ge z$. In some problems it is easy to find feasible solutions, but this is not always the case.

Dual bounds: finding lower bounds poses a different challenge. They are often called "dual" bounds for reasons that will become apparent soon. The most commonly used lower bounds are obtained via relaxations.

Relaxations

We say that the problem $\min_{x \in Y} g(x)$ is a relaxation of the problem $\min_{x \in X} f(x)$ if
• $X \subseteq Y$, and
• $g(x) \le f(x)$ for all $x \in X$.

Observe that the optimal value of a relaxation is a lower bound on the optimal value of the original problem.

Convex relaxations

Consider the problem
$$\min_x \; f(x) \quad \text{subject to} \quad x \in C, \;\; x_j \in \mathbb{Z}, \; j \in J,$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $C \subseteq \mathbb{R}^n$ are convex and $J \subseteq \{1, \ldots, n\}$. Its convex relaxation drops the integrality constraints:
$$\min_x \; f(x) \quad \text{subject to} \quad x \in C.$$

Lagrangian relaxations

Consider a problem of the form
$$z := \min_x \; f(x) \quad \text{subject to} \quad Ax \le b, \;\; x \in X.$$
If this problem is difficult, consider shifting some of the constraints to the objective: for $u \ge 0$, consider the Lagrangian relaxation
$$L(u) := \min_x \; f(x) + u^T(Ax - b) \quad \text{subject to} \quad x \in X.$$
Observe that $L(u) \le z$ for all $u \ge 0$. The best (highest) such bound is obtained by solving the dual problem
$$\max_{u \ge 0} \; L(u).$$
Observe that the dual is a concave maximization problem.

Lagrangian relaxation for facility location

Recall the facility location problem, with assignment constraints $\sum_{j=1}^n x_{ij} = 1$ for each client $i$, linking constraints $x_{ij} \le y_j$, and binary variables. Relaxing the assignment constraints with unrestricted multipliers $v \in \mathbb{R}^m$ (unrestricted because they are equalities) gives the Lagrangian relaxation
$$L(v) := \min_{x,y} \; \sum_{j=1}^n f_j y_j + \sum_{i=1}^m \sum_{j=1}^n (c_{ij} - v_i)\, x_{ij} + \sum_{i=1}^m v_i$$
subject to $x_{ij} \le y_j$ and $x_{ij}, y_j \in \{0, 1\}$ for $i = 1, \ldots, m$, $j = 1, \ldots, n$.

For each $v$, the Lagrangian relaxation $L(v)$ is easily solvable in closed form. Writing $u^- := \min(u, 0)$,
$$x_{ij}(v) = \begin{cases} 1 & \text{if } c_{ij} - v_i < 0 \text{ and } \sum_{\ell} (c_{\ell j} - v_\ell)^- + f_j < 0, \\ 0 & \text{otherwise,} \end{cases}
\qquad
y_j(v) = \begin{cases} 1 & \text{if } \sum_{\ell} (c_{\ell j} - v_\ell)^- + f_j < 0, \\ 0 & \text{otherwise.} \end{cases}$$
This gives both a lower bound $L(v)$ and a heuristic primal solution. Furthermore, the subdifferential of $-L(v)$ is easy to compute, so we can use a subgradient method to solve
$$\max_v \; L(v) \;\Longleftrightarrow\; \min_v \; -L(v).$$
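As a sanity check on these formulas, the following is a small sketch of subgradient ascent on the dual $\max_v L(v)$ in Python with NumPy. It uses the standard fact that $g_i = 1 - \sum_j x_{ij}(v)$ is a supergradient of $L$ at $v$; the $1/k$ step-size rule and the random instance are arbitrary choices for illustration, not part of the lecture.

```python
# A sketch of subgradient ascent on the facility-location Lagrangian dual.
# Implements the closed-form minimizers x_ij(v), y_j(v) derived above.
import numpy as np

def eval_L(v, f, c):
    """Evaluate L(v) and a supergradient g; c has shape (m, n), f shape (n,)."""
    red = c - v[:, None]                # reduced costs c_ij - v_i
    neg = np.minimum(red, 0.0)          # (c_ij - v_i)^-
    depot_score = f + neg.sum(axis=0)   # f_j + sum_i (c_ij - v_i)^-
    open_j = depot_score < 0            # y_j(v) = 1 exactly when this is negative
    x = (red < 0.0) & open_j            # x_ij(v), broadcast over clients
    L = depot_score[open_j].sum() + v.sum()
    g = 1.0 - x.sum(axis=1)             # supergradient: g_i = 1 - sum_j x_ij(v)
    return L, g

def dual_ascent(f, c, iters=500):
    v = np.zeros(c.shape[0])
    best = -np.inf
    for k in range(1, iters + 1):
        L, g = eval_L(v, f, c)
        best = max(best, L)             # best Lagrangian lower bound so far
        v = v + g / k                   # diminishing step sizes
    return best

# tiny random instance: m = 6 clients, n = 3 depots (made-up data)
rng = np.random.default_rng(0)
c = rng.uniform(1.0, 10.0, size=(6, 3))
f = rng.uniform(5.0, 15.0, size=3)
print("Lagrangian dual lower bound:", dual_ascent(f, c))
```

The returned value is a valid lower bound on the facility-location optimum for this instance, of the "dual bound" type discussed above.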
Branch and bound (B&B)

This is the most common algorithm for solving integer programs. It is a divide-and-conquer approach. Let $X = X_1 \cup X_2 \cup \cdots \cup X_k$ be a partition of $X$. Thus
$$\min_{x \in X} f(x) = \min_{i = 1, \ldots, k} \Big\{ \min_{x \in X_i} f(x) \Big\}.$$
Observe:
• A feasible solution to any of the subproblems yields an upper bound $u(X)$ on the original problem.
• Key idea: obtain a lower bound $\ell(X_i)$ for each subproblem $\min_{x \in X_i} f(x)$.
• If $\ell(X_i) \ge u(X)$, then we do not need to consider the subproblem $\min_{x \in X_i} f(x)$ any further.

Branch and bound algorithm

Consider the problem
$$\min_x \; f(x) \quad \text{subject to} \quad x \in C, \;\; x_j \in \mathbb{Z}, \; j \in J, \tag{IP}$$
where $f : \mathbb{R}^n \to \mathbb{R}$ and $C \subseteq \mathbb{R}^n$ are convex and $J \subseteq \{1, \ldots, n\}$.
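To tie the pieces together, here is a minimal branch-and-bound sketch for the knapsack problem from earlier, on the same toy data as the PuLP sketch. Since knapsack is a maximization, the roles of the bounds flip relative to the minimization framing above: feasible packings give primal (incumbent) bounds, and the fractional LP relaxation gives the bound used for pruning.

```python
# A minimal branch-and-bound sketch for the 0/1 knapsack (illustration only).
def knapsack_branch_and_bound(values, volumes, b):
    # Sort items by value density, which makes the greedy fractional
    # relaxation below the exact LP bound of each subproblem.
    items = sorted(zip(values, volumes), key=lambda vw: vw[0] / vw[1], reverse=True)
    best = 0  # incumbent: value of the best feasible solution found so far

    def relaxation_bound(i, cap, val):
        # LP relaxation of the remaining subproblem: fractional knapsack.
        for v, w in items[i:]:
            if w <= cap:
                cap, val = cap - w, val + v
            else:
                return val + v * cap / w  # fraction of the first item that doesn't fit
        return val

    def branch(i, cap, val):
        nonlocal best
        best = max(best, val)  # the items packed so far form a feasible solution
        if i == len(items) or relaxation_bound(i, cap, val) <= best:
            return             # leaf node, or pruned: bound no better than incumbent
        v, w = items[i]
        if w <= cap:
            branch(i + 1, cap - w, val + v)  # branch on x_i = 1
        branch(i + 1, cap, val)              # branch on x_i = 0

    branch(0, b, 0)
    return best

# same toy instance as in the PuLP sketch above
print(knapsack_branch_and_bound([10, 13, 7, 8], [3, 4, 2, 3], 7))  # -> 23
```

The pruning test is exactly the comparison of the subproblem's relaxation bound against the incumbent described in the B&B slide, with the inequality reversed for maximization.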