General Nonlinear Programming (NLP) Software
CAS 737 / CES 735
Kristin Davies, Hamid Ghaffari, Alberto Olvera-Salazar, Voicu Chis
January 12, 2006

Outline
Intro to NLP
Examination of:
IPOPT
PENNON
CONOPT
LOQO
KNITRO
Comparison of Computational Results
Conclusions

Intro to NLP

The general problem:

(NLP)   min f(x)
        s.t. h_i(x) = 0, i ∈ I = {1, ..., p}
             g_j(x) ≤ 0, j ∈ J = {1, ..., m}
             x ∈ C

where C ⊆ R^n is a certain set and f, g_j, h_i are functions defined on C.
Either the objective function or some of the constraints may be nonlinear.

Intro to NLP (cont’d…)
Recall:
The feasible region of any LP is a convex set
if the LP has an optimal solution, there is an extreme point of the feasible set that is optimal

However:

even if the feasible region of an NLP is a convex set, the optimal solution might not be an extreme point of the feasible region

Intro to NLP (cont’d…)
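A tiny made-up example illustrates the contrast: over the convex feasible set [0, 1], a linear objective is always optimal at an endpoint, but a nonlinear objective can attain its minimum in the interior.

```python
# Hypothetical toy example: min f(x) = (x - 0.5)^2 over the convex set [0, 1].
# A linear objective would be optimal at an extreme point (0 or 1), but this
# nonlinear objective is minimized at the interior point x = 0.5.
f = lambda x: (x - 0.5) ** 2
candidates = [i / 1000 for i in range(1001)]   # grid over [0, 1], endpoints included
best = min(candidates, key=f)
print(best)   # 0.5 -- an interior point, not an extreme point of [0, 1]
```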
Some Major approaches for NLP
Interior Point Methods
Use a log-barrier function
Penalty and Augmented Lagrange Methods
Use the idea of penalty to transform a constrained problem into a sequence of unconstrained problems.
Generalized reduced gradient (GRG)
Use a basic descent algorithm.
Successive quadratic programming (SQP)
Solves a quadratic approximation at every iteration.

Summary of NLP Solvers

NLP
  Augmented Lagrangian Methods: PENNON
  Interior Point Methods: KNITRO (trust region); IPOPT, LOQO (line search)
  Reduced Gradient Methods: CONOPT

IPOPT SOLVER (Interior Point OPTimizer)
Creators
Andreas Wächter and L.T. Biegler at CMU (~2002)
Aims
Solver for Large-Scale Nonlinear Optimization problems
Applications
General Nonlinear optimization
Process Engineering, DAE/PDE Systems, Process Design and Operations, Nonlinear Model Predictive Control, Design Under Uncertainty

IPOPT SOLVER (Interior Point OPTimizer)
Input Format
Can be linked to Fortran and C code, MATLAB, and AMPL.

Language / OS
Fortran 77, C++ (Recent Version IPOPT 3.x)
Linux/UNIX platforms and Windows

Commercial/Free
Released as open source code under the Common Public License (CPL).
It is available from the COIN-OR repository.

IPOPT SOLVER (Interior Point OPTimizer)
Key Claims
Global Convergence by using a Line Search.
Find a KKT point
Point that Minimizes Infeasibility (locally)
Exploits Exact Second Derivatives
via AMPL (automatic differentiation)
if not available, uses a quasi-Newton (BFGS) approximation
Exploits Sparsity of the KKT matrix
IPOPT has a version (IPOPT-C) to solve problems with MPEC constraints.

IPOPT SOLVER (Interior Point OPTimizer)
Algorithm
Interior Point method with a novel line search filter.
    min f(x)                      min φ_μl(x) = f(x) − μ_l Σ_i log(x^(i))
    x ∈ R^n            →          x ∈ R^n
    s.t. c(x) = 0                 s.t. c(x) = 0
         x ≥ 0

The bounds are replaced by a logarithmic barrier term. The method solves a sequence of barrier problems for decreasing values of μ_l.

IPOPT SOLVER (Interior Point OPTimizer)
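This outer barrier loop can be sketched on a hypothetical one-dimensional instance, min (x+1)^2 s.t. x ≥ 0 (solution x* = 0, on the boundary). Each barrier problem is minimized by a safeguarded Newton iteration and warm-starts the next; all numbers below are assumptions for illustration.

```python
# Barrier outer loop on a toy problem: min (x+1)^2  s.t.  x >= 0.
# Each barrier problem  min (x+1)^2 - mu*log(x)  is solved by Newton's method.

def solve_barrier_problem(mu, x):
    """Newton iterations on phi'(x) = 2(x+1) - mu/x = 0, keeping x > 0."""
    for _ in range(100):
        g = 2.0 * (x + 1.0) - mu / x          # phi'(x)
        if abs(g) < 1e-12:
            break
        h = 2.0 + mu / x / x                  # phi''(x) > 0, Newton well defined
        x_new = x - g / h
        x = x_new if x_new > 0 else 0.5 * x   # safeguard: stay strictly feasible
    return x

x = 1.0
for mu in [1.0, 0.1, 0.01, 1e-4, 1e-8]:       # decreasing barrier parameters mu_l
    x = solve_barrier_problem(mu, x)          # warm start from the previous solution
print(x)   # tends to the boundary solution x* = 0 as mu shrinks
```

The barrier minimizers trace the central path; as μ_l decreases, the iterates approach the constrained optimum from the interior.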
Algorithm
(For a fixed value of μ_l)
Solve the Barrier Problem
Search Direction (Primal-Dual IP)
Use a Newton method to solve the primal dual equations.
Hessian Approximation (BFGS update)
Line Search (Filter Method)
Feasibility Restoration Phase

IPOPT SOLVER (Interior Point OPTimizer)
Optimization Problem (Outer Loop)

    min f(x), x ∈ R^n
    s.t. c(x) = 0
         x ≥ 0

The bounds are replaced by a logarithmic barrier term. The method solves a sequence of barrier problems for decreasing values of μ_l:

    min φ_μl(x) = f(x) − μ_l Σ_i log(x^(i)), x ∈ R^n
    s.t. c(x) = 0

IPOPT SOLVER (Interior Point OPTimizer)

Algorithm (For a fixed value of μ_l)
Solve the Barrier Problem
Search Direction (Primal-Dual IP)
Use a Newton method to solve the primal dual equations
Hessian Approximation (BFGS update)

IPOPT SOLVER (Interior Point OPTimizer)
Inner Loop: Barrier NLP

    min φ_μl(x) = f(x) − μ_l Σ_i log(x^(i)), x ∈ R^n
    s.t. c(x) = 0

Optimality conditions (primal-dual equations, with dual variables v = μ X^{-1} e):

    ∇f(x) + ∇c(x) λ − v = 0
    c(x) = 0
    XVe − μe = 0

At a Newton iteration (x_k, λ_k, v_k), with H_k = ∇_xx L(x_k, λ_k), solve the linear system

    [ H_k        ∇c(x_k)   −I   ] [d_x]      [ ∇f(x_k) + ∇c(x_k) λ_k − v_k ]
    [ ∇c(x_k)^T  0          0   ] [d_λ]  = − [ c(x_k)                      ]
    [ V_k        0          X_k ] [d_v]      [ X_k V_k e − μe              ]

(in practice, regularization terms δI and −δ_c I may be added to the first and second diagonal blocks). Algorithm core: solution of this linear system.

IPOPT SOLVER (Interior Point OPTimizer)
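A minimal numerical sketch of this Newton step on a made-up toy instance, min x1^2 + x2^2 s.t. x1 + x2 = 1, x ≥ 0 (unregularized system; the fraction-to-boundary rule and all starting values are assumptions, not IPOPT's exact rules):

```python
import numpy as np

def pd_newton(mu, iters=30):
    """Primal-dual Newton iterations for one barrier problem of the toy instance
       min x1^2 + x2^2  s.t.  x1 + x2 = 1,  x >= 0."""
    x = np.array([0.7, 0.3]); lam = np.zeros(1); v = np.array([1.0, 1.0])
    e = np.ones(2)
    for _ in range(iters):
        H = 2.0 * np.eye(2)                    # Hessian of the Lagrangian
        A = np.ones((2, 1))                    # gradient of c(x) = x1 + x2 - 1
        K = np.block([[H, A, -np.eye(2)],
                      [A.T, np.zeros((1, 1)), np.zeros((1, 2))],
                      [np.diag(v), np.zeros((2, 1)), np.diag(x)]])
        rhs = -np.concatenate([2.0 * x + A @ lam - v,   # dual feasibility
                               [x.sum() - 1.0],         # primal feasibility
                               x * v - mu * e])         # relaxed complementarity
        d = np.linalg.solve(K, rhs)
        dx, dlam, dv = d[:2], d[2:3], d[3:]
        alpha = 1.0                  # fraction-to-boundary: keep x, v strictly > 0
        for z, dz in ((x, dx), (v, dv)):
            neg = dz < 0
            if neg.any():
                alpha = min(alpha, 0.995 * float(np.min(-z[neg] / dz[neg])))
        x, lam, v = x + alpha * dx, lam + alpha * dlam, v + alpha * dv
    return x

x = pd_newton(1e-8)
print(np.round(x, 4))   # both components approach the solution (0.5, 0.5)
```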
Algorithm (For a fixed value of μ_l)

Line Search (Filter Method)

A trial point x_{k+1} = x_k + α_k d_k (with v_{k+1} = v_k + α_k d_v) is accepted if it improves feasibility,

    ||c(x_k + α_k d_k)|| ≤ ||c(x_k)||,

or if it improves the barrier function,

    φ_μ(x_k + α_k d_k) ≤ φ_μ(x_k).
Assumes Newton directions are “good”, especially when using exact 2nd derivatives.

IPOPT SOLVER (Interior Point OPTimizer)
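The filter mechanism can be sketched as follows (a generic filter-method sketch with an assumed margin γ, not IPOPT's exact acceptance conditions): each iterate contributes a pair (θ, φ) of constraint violation and barrier value, and a trial point is accepted only if no stored pair dominates it.

```python
def acceptable(theta_trial, phi_trial, filter_pairs, gamma=1e-5):
    """Accept a trial point (theta = constraint violation, phi = barrier value)
       if, against every pair in the filter, it improves feasibility or the
       barrier function by a small margin (i.e. it is not dominated)."""
    return all(theta_trial <= (1.0 - gamma) * th or phi_trial <= ph - gamma * th
               for th, ph in filter_pairs)

# Filter entries (theta, phi) from hypothetical earlier iterates:
filt = [(1.0, 5.0), (0.5, 6.0)]
print(acceptable(0.4, 7.0, filt))   # True: better feasibility than every entry
print(acceptable(0.9, 6.5, filt))   # False: dominated by the entry (0.5, 6.0)
```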
Line Search - Feasibility Restoration Phase
Invoked when a new trial point does not provide sufficient improvement.

Restore feasibility: minimize the constraint violation

    min ||c(x)||_2^2,  x ∈ R^n,  s.t. x ≥ 0

Force a unique solution: find the closest feasible point by adding a penalty term

    min ||x − x_k||_2^2,  x ∈ R^n,  s.t. c(x) = 0, x ≥ 0

IPOPT SOLVER (Interior Point OPTimizer)
The complexity of the problem increases when complementarity conditions are introduced:

    min f(x, w, y),  x ∈ R^n, w ∈ R^m, y ∈ R^m         min f(x, w, y) − μ Σ_i ( ln x^(i) + ln w^(i) + ln y^(i) )
    s.t. c(x, w, y) = 0                          →      s.t. c(x, w, y) = 0
         x, w, y ≥ 0
         w^(i) y^(i) = 0,  i = 1, ..., m

The interior point method for NLPs has been extended to handle complementarity problems (Raghunathan et al. 2003): the condition w^(i) y^(i) = 0 is relaxed as w^(i) y^(i) ≤ δμ, via slacks w^(i) y^(i) + s^(i) = δμ, s^(i) ≥ 0.

IPOPT SOLVER (Interior Point OPTimizer)
Additional
IPOPT 3.x is now programmed in C++.
It is the primary NLP solver in an ongoing MINLP project with IBM.
References
Ipopt homepage: http://www.coin-or.org/Ipopt/ipopt-fortran.html
A. Wächter and L. T. Biegler. On the Implementation of a Primal-Dual Interior Point Filter Line Search Algorithm for Large-Scale Nonlinear Programming. Research Report, IBM T. J. Watson Research Center, Yorktown Heights, USA, March 2004 (accepted for publication in Mathematical Programming).

PENNON (PENalty method for NONlinear & semidefinite programming)
Creators
Michal Kocvara & Michael Stingl (~2001)

Aims
NLP, Semidefinite Programming (SDP), Linear & Bilinear Matrix Inequalities (LMI & BMI), Second Order Conic Programming (SOCP)

Applications
General purpose nonlinear optimization, systems of equations, control theory, economics & finance, structural optimization, engineering

SDP (SemiDefinite Programming)
Minimization of a linear function subject to the constraint that an affine combination of symmetric matrices is positive semidefinite:

    min c^T x
    s.t. F(x) ≥ 0,    where F(x) = F_0 + Σ_{i=1}^m x_i F_i

with m + 1 symmetric matrices F_0, ..., F_m. The Linear Matrix Inequality (LMI) F(x) ≥ 0 defines a convex constraint on x.

SDP (SemiDefinite Programming)
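Checking whether a given x satisfies the LMI reduces to an eigenvalue test on the symmetric matrix F(x); a small sketch (the example matrices are made up):

```python
import numpy as np

def lmi_feasible(x, F0, Fs, tol=1e-9):
    """True iff F(x) = F0 + sum_i x_i * F_i is positive semidefinite,
       tested via the smallest eigenvalue of the symmetric matrix F(x)."""
    F = F0 + sum(xi * Fi for xi, Fi in zip(x, Fs))
    return bool(np.linalg.eigvalsh(F).min() >= -tol)

F0 = np.eye(2)
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])   # eigenvalues of F0 + t*F1 are 1 - t, 1 + t
print(lmi_feasible([0.5], F0, [F1]))      # True:  eigenvalues 0.5 and 1.5
print(lmi_feasible([2.0], F0, [F1]))      # False: eigenvalue -1
```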
- always an optimal point on the boundary
- boundary consists of piecewise algebraic surfaces

SOCP (Second-Order Conic Programming)
Minimization of a linear function subject to second-order cone constraints:

    min c^T x
    s.t. ||A_i x + b_i|| ≤ c_i^T x + d_i

Called a second-order cone constraint since the unit second-order cone of dimension k is defined as

    C_k = { (u, t) : u ∈ R^{k−1}, t ∈ R, ||u|| ≤ t },

which is called the quadratic, ice-cream, or Lorentz cone.

PENNON (PENalty method for NONlinear & semidefinite programming)
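Membership in the Lorentz cone is just the norm test ||u||_2 ≤ t; a minimal sketch:

```python
import numpy as np

def in_lorentz_cone(z, tol=1e-12):
    """z = (u, t) with u in R^(k-1), t in R lies in C_k iff ||u||_2 <= t."""
    u, t = np.asarray(z[:-1], dtype=float), float(z[-1])
    return bool(np.linalg.norm(u) <= t + tol)

print(in_lorentz_cone([3.0, 4.0, 5.0]))   # True:  ||(3, 4)|| = 5 <= 5
print(in_lorentz_cone([3.0, 4.0, 4.0]))   # False: ||(3, 4)|| = 5 >  4
```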
Input Format
MATLAB function, routine called from C or Fortran, stand-alone program with AMPL

Language

Fortran 77

Commercial/Free

Variety of licenses ranging from Academic – single user ($460 CDN) to Commercial – company ($40,500 CDN)

PENNON (PENalty method for NONlinear & semidefinite programming)
Key Claims

1st available code for combined NLP, LMI, & BMI constraints
Aimed at (very) large-scale problems
Efficient treatment of different sparsity patterns in problem data
Robust with respect to feasibility of initial guess
Particularly efficient for large convex problems

PENNON (PENalty method for NONlinear & semidefinite programming)
Algorithm
Generalized version of the Augmented Lagrangian method (originally by Ben-Tal & Zibulevsky).

Augmented problem:

    min f(x)
    s.t. p_i φ_g(g_i(x)/p_i) ≤ 0,  i = 1, ..., m_g

where
    m_g = number of inequality constraints
    p_i > 0 = penalty parameter
    φ_g = penalty function
    u_i = Lagrange multiplier

Augmented Lagrangian:

    F(x, u, p) = f(x) + Σ_{i=1}^{m_g} u_i p_i φ_g(g_i(x)/p_i)

PENNON (PENalty method for NONlinear & semidefinite programming)

The Algorithm
Consider only inequality constraints from (NLP)
Based on choice of a penalty function, φg, that penalizes the inequality constraints
Penalty function must satisfy multiple properties such that the original (NLP) has the same solution as the following “augmented” problem:

    min f(x),  x ∈ R^n
    s.t. p_i φ_g(g_i(x)/p_i) ≤ 0,  i = 1, ..., m        (NLPφ)
    with p_i > 0

[3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)

The Algorithm (Cont’d…)
The Lagrangian of (NLPφ) can be viewed as a (generalized) augmented Lagrangian of (NLP):
    F(x, u, p) = f(x) + Σ_{i=1}^{m_g} u_i p_i φ_g(g_i(x)/p_i)

where g_i = inequality constraint, p_i = penalty parameter, u_i = Lagrange multiplier, φ_g = penalty function.
[3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)

The Algorithm STEPS
1. Let x^1 and u^1 be given. Let p_i^1 > 0, i = 1, ..., m_g.
2. For k = 1, 2, ... repeat until a stopping criterion is satisfied:
   (i)   Find x^{k+1} such that ||∇_x F(x^{k+1}, u^k, p^k)|| ≤ K
   (ii)  u_i^{k+1} = u_i^k φ'_g(g_i(x^{k+1})/p_i^k),  i = 1, ..., m_g
   (iii) p_i^{k+1} < p_i^k,  i = 1, ..., m_g

[3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)

The Algorithm STEPS
1. Let x^1 and u^1 be given. Let p_i^1 > 0, i = 1, ..., m_g.
Initialization
Can start with an arbitrary primal variable x; therefore, choose x^1 = 0.
Calculate initial multiplier values u_i^1.
Initial p_i^1 = π, typically between 10 and 10000.
[3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)
The Algorithm STEPS
Find x^{k+1} such that ||∇_x F(x^{k+1}, u^k, p^k)|| ≤ K

(Approximate) Unconstrained Minimization

Performed either by Newton with Line Search, or by Trust Region:

    x^{k+1} = argmin_x F(x, u^k, p^k)

Stopping criteria (with α = 0.1):

    ||∇_x F(x^{k+1}, u^k, p^k)||_2 ≤ α
    ||∇_x F(x^{k+1}, u^k, p^k)||_2 ≤ α · ||u^k − u^{k+1}||_2,  with u_i^{k+1} = u_i^k φ'_g(g_i(x^{k+1})/p_i^k)
    ||∇_x F(x^{k+1}, u^k, p^k)||_{H^{-1}} ≤ α · ||∇_x F(x^k, u^k, p^k)||_{H^{-1}}

[3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)
The Algorithm STEPS
u_i^{k+1} = u_i^k φ'_g(g_i(x^{k+1})/p_i^k),  i = 1, ..., m_g
Update of Multipliers
Restricted in order to satisfy:

    μ ≤ u_i^{k+1}/u_i^k ≤ 1/μ

with a positive μ < 1, typically μ = 0.5.
If the left side is violated, let u_i^{new} = μ u_i^k; if the right side is violated, let u_i^{new} = u_i^k/μ.

[3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)

The Algorithm STEPS
p_i^{k+1} < p_i^k,  i = 1, ..., m_g
Update of Penalty Parameter
No update during first 3 iterations
Afterwards, updated by a constant factor dependent on the initial penalty parameter.
The penalty update is stopped once p_eps (10^-6) is reached.

[3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)

The Algorithm
Choice of Penalty Function
The most efficient penalty function for convex NLP is the quadratic-logarithmic function:

    φ_g(t) = c_1 t^2 + c_2 t + c_3,        t ≥ r
    φ_g(t) = c_4 log(t − c_5) + c_6,       t < r

where r ∈ (−1, 1) and the constants c_i, i = 1, ..., 6, are chosen so that the required properties hold.
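One standard member of this family (the choice r = −1/2, with coefficients fixed by the C^2 gluing and the normalizations φ(0) = 0, φ'(0) = 1, following the Ben-Tal–Zibulevsky construction) can be written out and checked numerically:

```python
import math

def phi(t, r=-0.5):
    """Quadratic-logarithmic penalty: quadratic branch for t >= r, logarithmic
       branch for t < r, glued C^2 at r = -1/2, with phi(0) = 0, phi'(0) = 1."""
    if t >= r:
        return t + 0.5 * t * t                     # c1 = 1/2, c2 = 1, c3 = 0
    return -0.25 * math.log(-2.0 * t) - 0.375      # log branch, matches at r

print(round(phi(0.0), 10))                     # 0.0  (normalization phi(0) = 0)
print(round(phi(-0.5) - (-0.375), 10))         # 0.0  (both branches agree at r)
slope = (phi(1e-6) - phi(-1e-6)) / 2e-6        # central difference ~ phi'(0)
print(round(slope, 4))                         # 1.0  (normalization phi'(0) = 1)
```

The quadratic branch keeps the function finite for infeasible points, while the logarithmic branch flattens the penalty for strongly feasible ones.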
[4] Ben-Tal & Zibulevsky

PENNON (PENalty method for NONlinear & semidefinite programming)

The Algorithm
Overall Stopping Criteria
    |f(x^k) − F(x^k, u^k, p^k)| / (1 + |f(x^k)|) < ε    or    |f(x^k) − f(x^{k−1})| / (1 + |f(x^k)|) < ε

where ε = 10^−7.

[3] Kocvara & Stingl

PENNON (PENalty method for NONlinear & semidefinite programming)
Assumptions / Warnings
More tuning for nonconvex problems is still required
Slower at solving linear SDP problems since the algorithm is generalized

PENNON (PENalty method for NONlinear & semidefinite programming)
References
Kocvara, Michal & Michael Stingl. PENNON: A Code for Convex Nonlinear and Semidefinite Programming. Optimization Methods and Software, 18(3):317-333, 2003.
Kocvara, Michal & Michael Stingl. PENNON-AMPL User’s Guide. www.penopt.com. August 2003.
Ben-Tal, Aharon & Michael Zibulevsky. Penalty/Barrier Multiplier Methods for Convex Programming Problems. SIAM J. Optim., 7(2):347-366, 1997.
Pennon Homepage. www.penopt.com/pennon.html. Available online January 2007.