Sequential Quadratic Programming for Equality Constraints


Step Computation for Nonlinear Optimization
GIAN Short Course on Optimization: Applications, Algorithms, and Computation
Sven Leyffer, Argonne National Laboratory
September 12-24, 2016

Outline
1. Methods for Nonlinear Optimization: Introduction
2. Convergence Test and Termination Conditions (feasible stationary points; infeasible stationary points)
3. Sequential Quadratic Programming for Equality Constraints
4. SQP for General Problems
5. Sequential Linear and Quadratic Programming

Methods for Nonlinear Optimization: Introduction

We consider a nonlinear program (NLP) of the form

    minimize_x  f(x)   subject to  c(x) = 0,  x ≥ 0,

where
- the objective f : R^n → R is twice continuously differentiable,
- the constraints c : R^n → R^m are twice continuously differentiable,
- y are the multipliers of c(x) = 0,
- z ≥ 0 are the multipliers of x ≥ 0.

More general NLPs can easily be reformulated into this form.

General Framework for NLP Solvers

We cannot solve the NLP directly or explicitly, so we approximate it and generate a sequence of iterates {x^(k)}. Each approximate problem can be solved inexactly, and the approximation is refined if no progress is made.

Framework for Nonlinear Optimization Methods:
    Given (x^(0), y^(0), z^(0)) ∈ R^(n+m+n), set k = 0
    while x^(k) is not optimal do
        repeat
            Approximately solve/refine an approximation of the NLP around x^(k)
        until an improved solution x^(k+1) is found
        Check whether x^(k+1) is optimal; set k = k + 1
    end

Basic Components of NLP Methods
1. A convergence test that checks for optimal solutions or detects failure.
2. An approximate subproblem that computes an improved new iterate.
3. A globalization strategy that ensures convergence from a remote starting point x^(0).
4. A globalization mechanism that truncates steps produced by the local model, enforcing the globalization strategy by refining the local model.

Algorithms can be categorized by these components; the key is getting their interplay right.

Notation
- Iterates x^(k), k = 1, 2, ...
- Function values f^(k) = f(x^(k)) and c^(k) = c(x^(k))
- Gradient g^(k) = ∇f(x^(k)) and Jacobian A^(k) = ∇c(x^(k))
- Hessian of the Lagrangian H^(k)
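To make the notation concrete, here is a minimal Python sketch (the problem instance is invented for illustration, not taken from the course) that evaluates f^(k), c^(k), g^(k), A^(k), and H^(k) at a given iterate:

```python
import numpy as np

# Hypothetical toy instance with n = 2, m = 1:
#   f(x) = x1^2 + x2^2,   c(x) = x1*x2 - 1.
def f(x):      return x[0]**2 + x[1]**2
def c(x):      return np.array([x[0] * x[1] - 1.0])
def grad_f(x): return np.array([2.0 * x[0], 2.0 * x[1]])   # g(x) = gradient of f
def jac_c(x):  return np.array([[x[1]], [x[0]]])           # A(x) = Jacobian of c, n x m

def hess_lag(x, y):
    # L(x, y) = f(x) - yᵀ c(x); H = ∇²f(x) - y_1 ∇²c_1(x)
    return np.array([[2.0, -y[0]], [-y[0], 2.0]])

xk, yk = np.array([2.0, 0.5]), np.array([1.0])
fk, ck = f(xk), c(xk)            # f^(k) = 4.25, c^(k) = [0.0]
gk, Ak = grad_f(xk), jac_c(xk)   # g^(k) = [4, 1], A^(k) = [[0.5], [2.0]]
Hk = hess_lag(xk, yk)            # symmetric 2 x 2 matrix
```

All later subproblems in these slides are assembled from exactly these five quantities.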
Convergence Test and Termination Conditions

Convergence Test for Feasible Limits

The convergence test checks for optimality and provides a local improvement model.

Approximate convergence test, from the KKT conditions:

    ‖c^(k)‖ ≤ ε,   ‖g^(k) − A^(k) y^(k) − z^(k)‖ ≤ ε,   and   ‖min(x^(k), z^(k))‖ ≤ ε,

where ε > 0 is a tolerance. Note that

    min(x^(k), z^(k)) = 0   ⇔   x^(k) ≥ 0,  z^(k) ≥ 0,  x_i^(k) z_i^(k) = 0.

If the constraints do not satisfy MFCQ, then we may only get ‖A^(k) y^(k) + z^(k)‖ ≤ ε, an approximate Fritz-John point; this is equivalent to setting the "objective multiplier" y_0 = 0.

Unless the NLP is convex, we cannot guarantee to find a feasible point, and proving that an NLP is infeasible is hard. It is only realistic to consider the local feasibility problem

    minimize_x  ‖c(x)‖   subject to  x ≥ 0,

which can be formulated as a smooth problem, e.g. with the ℓ1 norm ‖v‖_1 = Σ_i |v_i|:

    minimize_{x, s+, s−}  Σ_{i=1}^m (s_i+ + s_i−)
    subject to  s+ − s− = c(x),  x ≥ 0,  s+ ≥ 0,  s− ≥ 0.

Approximate Optimality for Infeasible NLPs

An approximate infeasible first-order stationary point of the local feasibility problem satisfies

    ‖A^(k) y^(k) − z^(k)‖ ≤ ε   and   ‖min(x^(k), z^(k))‖ ≤ ε,

where y^(k) are the multipliers/weights corresponding to the feasibility norm. Example for the ℓ1 norm (verify using the KKT conditions!):
- if [c^(k)]_i < 0 then [y^(k)]_i = −1,
- if [c^(k)]_i > 0 then [y^(k)]_i = 1,
- otherwise −1 ≤ [y^(k)]_i ≤ 1.

These multipliers are a by-product of solving the local model and are related to the dual of the ℓ1 norm.
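The approximate convergence test is straightforward to implement once g^(k), A^(k), and the multiplier estimates are available. A minimal sketch (the toy problem and KKT point are chosen by hand for illustration) evaluating the three residuals:

```python
import numpy as np

def kkt_residuals(x, y, z, grad_f, c, jac_c):
    """Residuals of the approximate KKT test for min f(x) s.t. c(x) = 0, x >= 0."""
    g, A = grad_f(x), jac_c(x)
    feas = np.linalg.norm(c(x))               # ||c^(k)||
    stat = np.linalg.norm(g - A @ y - z)      # ||g^(k) - A^(k) y^(k) - z^(k)||
    comp = np.linalg.norm(np.minimum(x, z))   # ||min(x^(k), z^(k))||
    return feas, stat, comp

# Toy instance: min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0,  x >= 0.
# A KKT point is x = (1/2, 1/2) with y = 1 and z = 0.
feas, stat, comp = kkt_residuals(
    x=np.array([0.5, 0.5]), y=np.array([1.0]), z=np.zeros(2),
    grad_f=lambda x: 2.0 * x,
    c=lambda x: np.array([x[0] + x[1] - 1.0]),
    jac_c=lambda x: np.array([[1.0], [1.0]]),
)
eps = 1e-8
print(max(feas, stat, comp) <= eps)   # all three residuals vanish at this point
```

Any solver that maintains (x^(k), y^(k), z^(k)) can terminate as soon as all three residuals drop below the tolerance ε.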
Sequential Quadratic Programming for Equality Constraints

SQP for Equality Constraints

    minimize_x  f(x)   subject to  c(x) = 0.

SQP is Newton's method applied to the KKT conditions. Let the Lagrangian be L(x, y) = f(x) − yᵀ c(x), and apply Newton's method to

    ∇L(x, y) = [ ∇_x L(x, y) ]   [ g(x) − A(x) y ]
               [ ∇_y L(x, y) ] = [ −c(x)         ] = 0.

Let us quickly recall Newton's method.

Newton's Method for Nonlinear Equations

To solve F(x) = 0, obtain the approximation x^(k+1) of a solution by solving the linear model about x^(k):

    F(x^(k)) + ∇F(x^(k))ᵀ (x − x^(k)) = 0,   for k = 0, 1, ...

Theorem (Newton's Method). If F ∈ C² and ∇F(x*) is nonsingular, then Newton's method converges quadratically near x*.

[Figures: Newton iterations on a nonlinear equation.]

Newton's Method Applied to KKT Systems

Applying Newton's method around (x^(k), y^(k)) to the KKT system ∇L(x, y) = 0 gives the linearized system

    [ ∇²_xx L^(k)   ∇²_xy L^(k) ] [ d_x ]     [ g^(k) − A^(k) y^(k) ]
    [ ∇²_yx L^(k)   ∇²_yy L^(k) ] [ d_y ] = − [ −c^(k)              ].

The Lagrangian is linear in y, so ∇²_yy L^(k) = 0 and ∇²_xy L^(k) = −A^(k):

    [ H^(k)     −A^(k) ] [ d_x ]     [ g^(k) − A^(k) y^(k) ]
    [ −A^(k)ᵀ    0     ] [ d_y ] = − [ −c^(k)              ],

where H^(k) = ∇²_xx L^(k).

If we take full steps,

    x^(k+1) = x^(k) + d_x   and   y^(k+1) = y^(k) + d_y,

then the system is equivalent to the KKT matrix system

    [ H^(k)     −A^(k) ] [ d_x     ]   [ −g^(k) ]
    [ −A^(k)ᵀ    0     ] [ y^(k+1) ] = [  c^(k) ].

These
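For a quadratic objective with a linear constraint the linearization is exact, so a single solve of the full-step KKT system above lands on the minimizer. A sketch on an invented instance:

```python
import numpy as np

# Toy equality-constrained problem: min x1^2 + x2^2  s.t.  x1 + x2 = 1.
# Here H = 2I is constant and A = ∇c = (1, 1)ᵀ, so one Newton step is exact.
x = np.array([2.0, 0.0])              # arbitrary starting point
H = 2.0 * np.eye(2)
A = np.array([[1.0], [1.0]])
g = 2.0 * x                           # ∇f(x)
c = np.array([x[0] + x[1] - 1.0])

# Full-step system: [H  -A; -Aᵀ  0] [d_x; y_next] = [-g; c]
K = np.block([[H, -A], [-A.T, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([-g, c]))
d_x, y_next = sol[:2], sol[2:]
x_next = x + d_x
print(x_next, y_next)   # the minimizer x* = (0.5, 0.5) with multiplier y* = 1
```

For a general nonlinear c this system is solved repeatedly, one linearization per iterate.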
are the KKT conditions of the quadratic program

    minimize_d  q^(k)(d) = ½ dᵀ H^(k) d + g^(k)ᵀ d + f^(k)
    subject to  c^(k) + A^(k)ᵀ d = 0.

SQP for Equality Constraints

Equality-constrained NLP:

    minimize_x  f(x)   subject to  c(x) = 0.

Sequential Quadratic Programming Method:
    Given (x^(0), y^(0)), set k = 0
    repeat
        Solve the QP subproblem around x^(k):
            minimize_d  q^(k)(d) = ½ dᵀ H^(k) d + g^(k)ᵀ d + f^(k)
            subject to  c^(k) + A^(k)ᵀ d = 0
        Let the solution be (d_x, y^(k+1))
        Set x^(k+1) = x^(k) + d_x and k = k + 1
    until (x^(k), y^(k)) is optimal

Convergence of Newton's Method

Newton's method does not converge from an arbitrary starting point. Quasi-Newton approximations of the Hessian H^(k) use

    γ^(k) = ∇L(x^(k+1), y^(k+1)) − ∇L(x^(k), y^(k+1)).

Theorem (Quadratic Convergence of SQP). Let x* be a second-order sufficient point, and assume that the KKT matrix

    [ H*    −A* ]
    [ −A*ᵀ   0  ]

is nonsingular. If x^(0) is sufficiently close to x*, then SQP converges quadratically to x*.
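The SQP iteration above can be sketched end-to-end on a genuinely nonlinear toy problem (the instance is invented for illustration). Taking full Newton steps from a nearby starting point exhibits the fast local convergence promised by the theorem:

```python
import numpy as np

# Toy problem: min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0.
# The minimizer is x* = (-1, -1) with multiplier y* = -1/2.
def grad_f(x): return np.array([1.0, 1.0])
def c(x):      return np.array([x[0]**2 + x[1]**2 - 2.0])
def jac_c(x):  return np.array([[2.0 * x[0]], [2.0 * x[1]]])
def hess_lag(x, y):
    # H = ∇²f - y ∇²c = 0 - y * 2I
    return -2.0 * y[0] * np.eye(2)

x, y = np.array([-1.2, -0.8]), np.array([-0.5])   # start close to x*
for k in range(20):
    H, A = hess_lag(x, y), jac_c(x)
    K = np.block([[H, -A], [-A.T, np.zeros((1, 1))]])
    rhs = np.concatenate([-grad_f(x), c(x)])
    sol = np.linalg.solve(K, rhs)     # QP subproblem solved via its KKT system
    d_x, y = sol[:2], sol[2:]         # step and new multiplier y^(k+1)
    x = x + d_x                       # full step
print(x, y)   # close to (-1, -1) and -0.5
```

From a remote starting point this plain loop can diverge, which is exactly why the globalization strategy and mechanism from the framework slide are needed.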
SQP for General Problems

SQP for General Constraints

    minimize_x  f(x)   subject to  c(x) = 0,  x ≥ 0.

SQP methods date back to the 1970s [Han, 1977, Powell, 1978]. They minimize a quadratic model m^(k)(d) subject to a linearization of the constraints around x^(k), for the step d := x − x^(k).

QP Subproblem

    minimize_d  m^(k)(d) := g^(k)ᵀ d + ½ dᵀ H^(k) d
    subject to  c^(k) + A^(k)ᵀ d = 0,
                x^(k) + d ≥ 0.

The new iterate is x^(k+1) = x^(k) + d with new multipliers y^(k+1), where
- H^(k) ≈ ∇² L(x^(k), y^(k)) is an approximate Hessian of the Lagrangian,
- y^(k) is the multiplier estimate at iteration k.

Discussion of SQP

- If H^(k) is not positive definite on the null space of the linearized equality constraints, the QP subproblem is nonconvex; SQP is OK with a local minimizer.
- Solving the QP subproblem can be computationally expensive: a factorization of [A^(k) : V] is OK, but factorizing the dense reduced Hessian Zᵀ H^(k) Z is slow.
- Hence we look for cheaper alternatives for large-scale NLP.
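The inequality constraints x^(k) + d ≥ 0 are what make the QP subproblem expensive: the solver must discover which bounds are active. For a tiny instance one can brute-force this by guessing an active set, solving the resulting equality-constrained KKT system, and accepting the guess if the inactive bounds hold and the bound multipliers are nonnegative. A sketch (toy data, not a production QP solver):

```python
import itertools
import numpy as np

def solve_qp_brute(H, g, A, c, x):
    """min 0.5 dᵀHd + gᵀd  s.t.  c + Aᵀd = 0,  x + d >= 0  (tiny n only)."""
    n, m = H.shape[0], A.shape[1]
    for r in range(n + 1):
        for W in itertools.combinations(range(n), r):   # trial active set
            nW = len(W)
            # Unknowns: d (n), y (m), z_W (nW).
            # Rows: stationarity H d - A y - z = -g, equality Aᵀ d = -c, bounds d_i = -x_i.
            K = np.zeros((n + m + nW, n + m + nW))
            rhs = np.zeros(n + m + nW)
            K[:n, :n] = H
            K[:n, n:n + m] = -A
            K[n:n + m, :n] = A.T
            rhs[:n] = -g
            rhs[n:n + m] = -c
            for j, i in enumerate(W):
                K[i, n + m + j] = -1.0      # -z_i enters stationarity row i
                K[n + m + j, i] = 1.0       # active bound: d_i = -x_i
                rhs[n + m + j] = -x[i]
            try:
                sol = np.linalg.solve(K, rhs)
            except np.linalg.LinAlgError:
                continue
            d, z_W = sol[:n], sol[n + m:]
            if np.all(x + d >= -1e-9) and np.all(z_W >= -1e-9):
                return d, list(W)
    raise RuntimeError("no optimal active set found")

# Toy data: H = I, g = (2, -1), one linearized constraint d1 + d2 = 0, x^(k) = (0.5, 0.5).
d, active = solve_qp_brute(np.eye(2), np.array([2.0, -1.0]),
                           np.array([[1.0], [1.0]]), np.array([0.0]),
                           np.array([0.5, 0.5]))
print(d, active)   # the bound on the first variable is active
```

Real QP solvers update one active-set guess at a time instead of enumerating all 2^n of them; the combinatorial blow-up here is precisely the cost that motivates the cheaper alternatives below.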
Sequential Linear and Quadratic Programming

Sequential Linear Programming (SLP)

General nonlinear program (NLP):

    minimize_x  f(x)   subject to  c(x) = 0,  x ≥ 0.

LPs can be solved more efficiently than QPs, so consider a sequential linear algorithm. We need a trust region, because the LP can be unbounded.

LP Trust-Region Subproblem

    minimize_d  m^(k)(d) = g^(k)ᵀ d
    subject to  c^(k) + A^(k)ᵀ d = 0,
                x^(k) + d ≥ 0,
                ‖d‖_∞ ≤ Δ_k,

where Δ_k > 0 is the trust-region radius; we need Δ_k → 0 to converge. This generalizes the steepest descent method from Part II.

SLQP Methods

Sequential linear/quadratic programming (SLQP) methods combine SLP (fast subproblem) with SQP (fast convergence). They use the LP trust-region subproblem to find an active-set estimate

    A := { i : [x^(k)]_i + d̂_i = 0 },

where d̂ solves the LP. Given the active-set estimate, they then perform one equality-constrained SQP step.
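On a toy instance with n = 2 and one linearized equality constraint, the LP trust-region subproblem can be solved by hand: eliminating d2 = −d1 reduces the feasible set to an interval in d1, and a linear objective attains its minimum at an endpoint. A sketch (all data invented) that also reads off the active-set estimate an SLQP method would use:

```python
import numpy as np

# Toy linearization at x^(k) = (1.5, 0.5): g^(k) = (1, 3), linearized constraint
# d1 + d2 = 0 (c^(k) = 0), bounds x^(k) + d >= 0, trust region ||d||_inf <= 1.
x = np.array([1.5, 0.5])
g = np.array([1.0, 3.0])
delta = 1.0

# Eliminate d2 = -d1 and intersect the constraints on d1:
#   |d1| <= delta, |d2| = |d1| <= delta, d1 >= -x1, and d2 = -d1 >= -x2.
lo = max(-delta, -x[0])   # lower endpoint for d1
hi = min(delta, x[1])     # upper endpoint (from -d1 >= -x2, i.e. d1 <= x2)

# The model g1*d1 + g2*(-d1) is linear in d1, so the optimum is at an endpoint.
cands = [np.array([t, -t]) for t in (lo, hi)]
d_hat = min(cands, key=lambda d: g @ d)

# Active-set estimate A := { i : [x^(k)]_i + d_hat_i = 0 }.
active = [i for i in range(2) if abs(x[i] + d_hat[i]) <= 1e-9]
print(d_hat, active)   # the bound on the second variable becomes active
```

In a full SLQP method, only the variables outside this active-set estimate would remain free in the subsequent equality-constrained SQP step.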
