Proceedings of the Twenty-Sixth RAMP Symposium, Hosei University, Tokyo, October 16-17, 2014

Solving Mixed-Integer Quadratic Programming problems with IBM-CPLEX: a progress report

Christian Bliek (1), Pierre Bonami (2), and Andrea Lodi (3)

(1) CPLEX Optimization, IBM France, 1681-HB2 Route des Dolines, 06560 Valbonne, France. E-mail address: [email protected]
(2) CPLEX Optimization, IBM Spain, Sta. Hortensia 26-28, 28002 Madrid, Spain. E-mail address: [email protected]
(3) DEI, University of Bologna, and IBM-Unibo Center of Excellence in Mathematical Optimization, Viale Risorgimento 2, 40136 Bologna, Italy. E-mail address: [email protected]

Abstract: Mixed-Integer Quadratic Programming problems have a significant impact on both the theory and the practice of mathematical optimization. We discuss the classical algorithmic approaches, their implementation within IBM-CPLEX, and new algorithmic advances.

Keywords: Quadratic Programming, branch and bound, convex programming, bound reduction

1. Introduction

We consider the general Mixed-Integer Quadratic Programming (MIQP) problem

    min  (1/2) x^T Q x + c^T x          (1)
    s.t. Ax = b                         (2)
         x_j ∈ Z,  j = 1, ..., p        (3)
         l ≤ x ≤ u                      (4)

where Q ∈ R^{n×n} is a symmetric matrix. Note that the bound constraints (4) are fundamental from a complexity standpoint, because a striking result by Jeroslow [11] shows that unbounded problems are undecidable.

The relevance of MIQP in mathematical optimization is twofold. On the one side, a number of applications arise in practice, starting from the most classical one in portfolio optimization, see, e.g., [4, 7, 17]. On the other side, MIQP has clearly been the first step in the methodological generalization of Mixed-Integer Linear Programming (MILP) to general Mixed-Integer Nonlinear Programming (MINLP).

It is well known that MIQP is NP-hard, trivially because it contains MILP as a special case. However, it is important to notice that, differently from MILP, the source of complexity of MIQP is not restricted to the integrality requirement on (some of) the variables (3). In fact, the (continuous) Quadratic Programming (QP) special case (i.e., p = 0) is, in general, NP-hard as well. Let G = (N, E) be a graph and Q be the incidence matrix of G. The optimal value of

    min { (1/2) x^T Q x : sum_{j=1}^n x_j = 1, x ≥ 0 }

is (1/2)(1 − 1/χ(G)), where χ(G) is the clique number of G [14], which is well known to be NP-hard to compute.

Therefore, we will distinguish between the case in which the relaxation obtained by dropping the integrality requirements (3) (if any) is convex (thus solvable in polynomial time), and the case in which it is nonconvex. Roughly speaking, convex MIQPs are currently solved by relying heavily on MILP techniques, either by replacing the Linear Programming (LP) solver of the classical branch-and-bound scheme with a QP solver, or by solving MILPs as subproblems. Nonconvex MIQPs instead require somewhat more sophisticated techniques, namely Global Optimization (GO). An intermediate case between these two is the one in which Q is not positive semidefinite but the MIQP can easily be reformulated as a convex MIQP. This is true, for example, when all variables are constrained to be 0-1, or more generally when all products involve a 0-1 variable and a bounded variable (see the sketch below).

In this short paper we review the evolution of IBM-CPLEX in solving QPs and MIQPs, with special emphasis on nonconvex MIQPs, because the capability of solving them to proven optimality has been added to the solver only very recently.
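To make the 0-1 reformulation mentioned above concrete, here is a minimal numpy sketch (ours, not part of the original paper) of the standard diagonal-shift convexification: since x_j^2 = x_j for binary variables, adding λ·sum_j (x_j^2 − x_j) to the objective leaves its value unchanged on 0-1 points while shifting the Hessian to Q + 2λI, which is positive semidefinite once λ ≥ −λ_min(Q)/2. The function name and the eigenvalue-based choice of λ are illustrative assumptions, not the CPLEX implementation.

```python
import numpy as np

def convexify_binary_miqp(Q, c):
    """Diagonal-shift convexification sketch for a pure 0-1 MIQP.

    For binary x, x_j**2 == x_j, so adding lam * sum_j (x_j**2 - x_j) to
    (1/2) x^T Q x + c^T x does not change the objective on 0-1 points.
    The shifted data are Q' = Q + 2*lam*I and c' = c - lam*e; choosing
    lam = max(0, -lambda_min(Q) / 2) makes Q' positive semidefinite.
    """
    lam = max(0.0, -np.linalg.eigvalsh(Q).min() / 2.0)
    n = Q.shape[0]
    Q_shift = Q + 2.0 * lam * np.eye(n)
    c_shift = c - lam * np.ones(n)
    return Q_shift, c_shift

# Tiny usage example: an indefinite Q becomes PSD after the shift,
# while the objective value at any 0-1 point is unchanged.
Q = np.array([[0.0, 3.0], [3.0, 0.0]])
c = np.array([1.0, -1.0])
Qp, cp = convexify_binary_miqp(Q, c)
assert np.linalg.eigvalsh(Qp).min() >= -1e-9
x = np.array([1.0, 1.0])  # a 0-1 point
assert abs((0.5 * x @ Qp @ x + cp @ x) - (0.5 * x @ Q @ x + c @ x)) < 1e-9
```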
Schematically, Table 1 below outlines a brief history of MIQP within CPLEX, where B&B stands for branch and bound. We remind the reader that a QP is convex if and only if the matrix Q is positive semidefinite.

    class           |  p    |  Q    | algorithm             | version (year)
    ----------------|-------|-------|-----------------------|---------------
    convex QP       |  0    |  ⪰ 0  | barrier               | 4.0 (1995)
    –               |  –    |  –    | QP simplex            | 8.0 (2002)
    convex MIQP     |  > 0  |  ⪰ 0  | B&B                   | 8.0 (2002)
    nonconvex QP    |  0    |  ⋡ 0  | barrier (local)       | 12.3 (2011)
    –               |  –    |  –    | spatial B&B (global)  | 12.6 (2013)
    nonconvex MIQP  |  > 0  |  ⋡ 0  | spatial B&B (global)  | 12.6 (2013)

    Table 1: History of MIQP within CPLEX.

The paper is organized as follows. In Section 2 we briefly discuss the classical algorithms for convex MIQPs and some implementation choices that are peculiar to CPLEX. In Section 3 we move to the nonconvex MIQP case and present the current status of the implementation of solution techniques for this class. Both Sections 2 and 3 end with some computational experiments giving a snapshot of the current performance of CPLEX on MIQPs. Since the algorithmic treatment of 0-1 nonconvex MIQPs is closer to that of convex MIQPs than to that of general nonconvex MIQPs, we review them as part of the section on convex MIQPs. Finally, in Section 4 we draw some conclusions.

2. Convex MIQPs

The solution of convex MINLPs has reached, in the last decade, a rather stable and mature algorithmic technology, see, e.g., Bonami et al. [6]. Essentially, the two cornerstones of algorithms and software are Nonlinear Programming (NLP)-based branch and bound [10] and Outer Approximation (OA) [8]. Although NLP-based branch and bound is the only one of the two that is relevant for CPLEX in the special case of (convex) MIQPs (see the discussion at the end of the section), we review OA as well. This is because the most standard trick in MINLP is to add a variable α ∈ R, replace the original objective function with "min α", and add the constraint f(x) ≤ α. Of course, this could be done for MIQPs as well (with f(x) = (1/2) x^T Q x + c^T x), thus transforming the problem into a convex Mixed-Integer Quadratically-Constrained Programming (MIQCP) problem, and sometimes this does make sense in practice [9].

The NLP-based branch and bound is a straightforward generalization of the main enumerative algorithm for MILP. The main difference is that an NLP, a QP in our special case, is solved at every node of the branch-and-bound tree to provide a valid lower bound, instead of an LP. The other main components of the MILP scheme remain the same: branching on the integer constrained variables, and pruning nodes based on
• infeasibility: the node relaxation is infeasible,
• bound: the node lower bound value is not smaller than the incumbent solution value, and
• integer feasibility: the node relaxation admits an integer solution.
The main source of inefficiency of this straightforward extension is, for general convex MINLP, the difficulty of warm starting NLP solvers, i.e., of reusing information from the solution of the relaxation at one node to the next. However, for the convex MIQP case this issue is relatively under control when the QP simplex is used to solve the node relaxations.

The basic idea of the OA decomposition is to take first-order approximations of the constraints at different points and build an MILP equivalent to the initial MINLP. For a nonlinear function g(x) and a set K of points x^k, k ∈ K, this corresponds to writing the constraints

    g(x^k) + ∇g(x^k)^T (x − x^k) ≤ 0,   k ∈ K.          (5)

This basic idea is illustrated in Figure 1.
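As a concrete illustration of constraint (5) in the quadratic case, the following small numpy sketch (ours, not from the paper) builds the coefficients of the outer-approximation cut of a convex quadratic function g(x) = (1/2) x^T Q x + c^T x + d at a given linearization point x^k; the helper name and the constant term d are illustrative assumptions.

```python
import numpy as np

def oa_cut(Q, c, d, xk):
    """Coefficients of the OA cut (5) for g(x) = 0.5*x^T Q x + c^T x + d at xk.

    Returns (a, b) such that the linear inequality a^T x <= b is exactly
    g(xk) + grad_g(xk)^T (x - xk) <= 0, which for convex g is a valid
    outer approximation of the region {x : g(x) <= 0}.
    """
    g_xk = 0.5 * xk @ Q @ xk + c @ xk + d   # g evaluated at xk
    grad = Q @ xk + c                       # gradient of g at xk (Q symmetric)
    return grad, grad @ xk - g_xk

# Usage sketch: linearizing the unit disk x1^2 + x2^2 - 1 <= 0 at xk = (1, 0)
# (Q = 2*I, c = 0, d = -1) gives the tangent cut 2*x1 <= 2, i.e. x1 <= 1.
a, b = oa_cut(2.0 * np.eye(2), np.zeros(2), -1.0, np.array([1.0, 0.0]))
print(a, b)   # -> [2. 0.] 2.0
```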
It is easy to see that the MILP constructed through an initial set of linearization points is a relaxation of the original convex MINLP. Then, if solved to optimality, it provides a valid lower bound value for the original problem and a solution x̂ such that x̂_j ∈ Z, j = 1, ..., p. However, this solution might be NLP infeasible (because of the first-order approximation used), so the problem NLP(x̂) obtained by fixing the integer components of the original problem (x_j = x̂_j, j = 1, ..., p) is solved. If NLP(x̂) is feasible, then it in turn provides an upper bound value. Otherwise, general-purpose NLP software will typically return a point minimizing a weighted violation of the constraints. In both cases, a new point x^{|K|+1} can be added to the set K, and the method is iterated by applying the first-order approximation to it, thus enhancing the quality of the MILP equivalent. Duran and Grossmann [8] show that the iterative scheme converges to the optimal solution value of the original problem in a finite number of steps, provided the nonlinear functions are convex and continuously differentiable and a constraint qualification holds at each point x^k, k ∈ K.

    Figure 1: The Outer Approximation MILP equivalent [Duran and Grossmann, 1986]. The original convex MINLP

        min  c^T x
        s.t. g_i(x) ≤ 0,   i = 1, ..., m,
             x_j ∈ Z,      j = 1, ..., p,

    is replaced, by taking first-order approximations of the constraints at the points x^k, k = 1, ..., K, with the equivalent MILP

        min  c^T x
        s.t. g_i(x^k) + ∇g_i(x^k)^T (x − x^k) ≤ 0,   i = 1, ..., m,  k = 1, ..., K,
             x_j ∈ Z,      j = 1, ..., p.

The OA algorithm outlined above requires solving one MILP at each iteration. This can be rather time consuming if the initial MILP does not provide a good approximation of the original problem. However, OA can be embedded in a single tree search [15, 6, 1]. Namely, one starts solving the initial MILP by branch and bound and, at each integer feasible node:
(i) solves NLP(x̂) and enriches the set of linearization points,
(ii) re-solves the LP relaxation of the node with the new first-order constraints, and
(iii) repeats as long as the node is integer feasible.
Of course, no pruning by integer feasibility is allowed.
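The single-tree variant above is best left to an MILP solver's callback machinery, but the iterative OA decomposition described earlier in this section can be summarized in a few lines. The sketch below is ours, not the paper's algorithm nor the CPLEX implementation; solve_milp_relaxation and solve_fixed_nlp are hypothetical placeholders standing in for calls to an MILP solver and to an NLP/QP solver, respectively.

```python
def outer_approximation(problem, tol=1e-6, max_iter=100):
    """Sketch of the iterative OA decomposition described in this section.

    `solve_milp_relaxation(problem, points)` is assumed to solve the MILP
    built from the linearization points (cuts of type (5)) and return a
    lower bound together with a point xhat whose first p components are
    integer.  `solve_fixed_nlp(problem, xhat)` is assumed to solve NLP(xhat)
    and return (feasible, xbar, value); if NLP(xhat) is infeasible, xbar is
    a minimizer of a weighted constraint violation.  Both are placeholders.
    """
    points = []                      # the set K of linearization points
    lower, upper, best = float("-inf"), float("inf"), None
    for _ in range(max_iter):
        lower, xhat = solve_milp_relaxation(problem, points)   # lower bound
        if upper - lower <= tol:
            break                                              # proven optimal
        feasible, xbar, value = solve_fixed_nlp(problem, xhat)
        if feasible and value < upper:
            upper, best = value, xbar                          # new incumbent
        points.append(xbar)          # enrich K and iterate with the new cuts
    return best, lower, upper
```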