
Numerical Optimization
Unit 9: Penalty Method and Interior Point Method
Unit 10: Filter Method and the Maratos Effect
Che-Rung Lee
Scribe notes, May 1, 2011

Penalty method

The idea is to add penalty terms to the objective function, which turns a constrained optimization problem into an unconstrained one.

Quadratic penalty function. Example (for equality constraints):

  $\min\; x_1 + x_2$  subject to  $x_1^2 + x_2^2 - 2 = 0$,  with solution $\vec{x}^* = (-1, -1)$.

Define
  $Q(\vec{x}; \mu) = x_1 + x_2 + \frac{\mu}{2}\,(x_1^2 + x_2^2 - 2)^2.$

For $\mu = 1$,
  $\nabla Q(\vec{x}; 1) = \begin{pmatrix} 1 + 2(x_1^2 + x_2^2 - 2)x_1 \\ 1 + 2(x_1^2 + x_2^2 - 2)x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$  gives  $x_1^* = x_2^* \approx -1.1$.

For $\mu = 10$,
  $\nabla Q(\vec{x}; 10) = \begin{pmatrix} 1 + 20(x_1^2 + x_2^2 - 2)x_1 \\ 1 + 20(x_1^2 + x_2^2 - 2)x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$  gives  $x_1^* = x_2^* \approx -1.012$,

which is closer to the true solution.

Size of $\mu$

It seems that the larger $\mu$ is, the better the solution. But when $\mu$ is large, the Hessian $\nabla^2 Q \approx \mu \nabla c \nabla c^T$ is ill-conditioned:
  $Q(x; \mu) = f(x) + \frac{\mu}{2}\,c(x)^2$
  $\nabla Q = \nabla f + \mu c \nabla c$
  $\nabla^2 Q = \nabla^2 f + \mu \nabla c \nabla c^T + \mu c \nabla^2 c$

However, $\mu$ cannot be too small either.

Example: $\min_{\vec{x}} -5x_1^2 + x_2^2$ s.t. $x_1 = 1$. Here $Q(\vec{x}; \mu) = -5x_1^2 + x_2^2 + \frac{\mu}{2}(x_1 - 1)^2$. For $\mu < 10$, the problem $\min_{\vec{x}} Q(\vec{x}; \mu)$ is unbounded below.

Quadratic penalty method

Pick a proper initial guess of $\mu$ and gradually increase it.

Algorithm: quadratic penalty method
1. Given $\mu_0 > 0$ and $\vec{x}_0$.
2. For $k = 0, 1, 2, \ldots$
   (a) Solve $\min_{\vec{x}} Q(\vec{x}; \mu_k) = f(\vec{x}) + \frac{\mu_k}{2} \sum_{i \in \mathcal{E}} c_i^2(\vec{x})$.
   (b) If converged, stop.
   (c) Increase $\mu_{k+1} > \mu_k$ and find a new $\vec{x}_{k+1}$.

Problem: for any finite $\mu$, the minimizer of $Q$ is not exactly the solution of the constrained problem. (A Python sketch of this loop is given at the end of this part.)

Augmented Lagrangian method

Use the Lagrangian function to remedy the inexactness problem. Let
  $L_A(\vec{x}, \vec{\lambda}; \mu) = f(\vec{x}) - \sum_{i \in \mathcal{E}} \lambda_i c_i(\vec{x}) + \frac{\mu}{2} \sum_{i \in \mathcal{E}} c_i^2(\vec{x})$,
  $\nabla_x L_A = \nabla f(\vec{x}) - \sum_{i \in \mathcal{E}} \lambda_i \nabla c_i(\vec{x}) + \mu \sum_{i \in \mathcal{E}} c_i(\vec{x}) \nabla c_i(\vec{x})$.

By the Lagrangian theory,
  $\nabla_x L_A = \nabla f - \sum_{i \in \mathcal{E}} \underbrace{(\lambda_i - \mu c_i)}_{\approx\, \lambda_i^*} \nabla c_i$.

At the optimal solution, $c_i(\vec{x}^*) = \frac{1}{\mu}(\lambda_i - \lambda_i^*)$. If we can make $\lambda_i \to \lambda_i^*$, then $\mu_k$ need not be increased indefinitely; use the update
  $\lambda_i^{k+1} = \lambda_i^k - \mu_k c_i(\vec{x}_k)$.
Algorithm: update $\lambda_i$ at each iteration. (A sketch follows at the end of this part.)

Inequality constraints

There are two approaches to handle inequality constraints:
1. Make the objective function nonsmooth (non-differentiable at some points).
2. Add slack variables to turn the inequality constraints into equality constraints:
   $c_i(\vec{x}) - s_i = 0$, and $c_i \ge 0 \Rightarrow s_i \ge 0$.
   But then we have bound constraints on the slack variables.
We will focus on the second approach here.

Suppose the augmented Lagrangian method is used and all inequality constraints are converted to bound constraints. For a fixed $\mu$ and $\vec{\lambda}$,
  $\min_{\vec{x}} L_A(\vec{x}, \vec{\lambda}; \mu) = f(\vec{x}) - \sum_{i=1}^m \lambda_i c_i(\vec{x}) + \frac{\mu}{2} \sum_{i=1}^m c_i^2(\vec{x})$
  s.t. $\vec{\ell} \le \vec{x} \le \vec{u}$.
The first-order necessary condition for $\vec{x}$ to be a solution of the above problem is
  $\vec{x} = P(\vec{x} - \nabla_x L_A(\vec{x}, \vec{\lambda}; \mu),\, \vec{\ell},\, \vec{u})$,
where, for all $i = 1, 2, \ldots, n$,
  $P(\vec{g}, \vec{\ell}, \vec{u})_i = \begin{cases} \ell_i, & \text{if } g_i \le \ell_i, \\ g_i, & \text{if } g_i \in (\ell_i, u_i), \\ u_i, & \text{if } g_i \ge u_i. \end{cases}$
(See the projection sketch below.)

Nonlinear gradient projection method

Sequential quadratic programming plus the trust region method to solve $\min_{\vec{x}} f(\vec{x})$ s.t. $\vec{\ell} \le \vec{x} \le \vec{u}$.

Algorithm: nonlinear gradient projection method
1. At each iteration, build a quadratic model
   $q(\vec{x}) = \frac{1}{2}(\vec{x} - \vec{x}_k)^T B_k (\vec{x} - \vec{x}_k) + \nabla f_k^T (\vec{x} - \vec{x}_k)$,
   where $B_k$ is an SPD approximation of $\nabla^2 f(\vec{x}_k)$.
2. For some $\Delta_k$, use the gradient projection method to solve
   $\min_{\vec{x}} q(\vec{x})$  s.t.  $\max(\vec{\ell}, \vec{x}_k - \Delta_k) \le \vec{x} \le \min(\vec{u}, \vec{x}_k + \Delta_k)$.
3. Update $\Delta_k$ and repeat until convergence.
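To make the quadratic penalty loop concrete, here is a minimal Python sketch on the circle example above. It is an illustration under stated assumptions: the BFGS subproblem solver, the starting point, the tolerance, and the tenfold increase of $\mu$ are choices of this sketch, not prescriptions from the notes.

```python
# Quadratic penalty method on:  min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0] + x[1]

def c(x):
    return x[0]**2 + x[1]**2 - 2.0

def Q(x, mu):
    # Q(x; mu) = f(x) + (mu/2) c(x)^2
    return f(x) + 0.5 * mu * c(x)**2

x, mu = np.zeros(2), 1.0
for k in range(8):
    # solve the unconstrained subproblem, warm-started at the previous minimizer
    x = minimize(Q, x, args=(mu,), method='BFGS').x
    print(f"mu = {mu:>10.1f}   x = {x}   c(x) = {c(x):+.2e}")
    if abs(c(x)) < 1e-6:   # constraint violation small enough: stop
        break
    mu *= 10.0             # otherwise increase the penalty parameter
```

The printout shows both effects discussed above: the constraint violation shrinks only like $1/\mu$, so the plain penalty method must drive $\mu \to \infty$, and the subproblems become increasingly ill-conditioned as it does.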
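For comparison, a minimal sketch of the augmented Lagrangian iteration on the same example, using the multiplier update $\lambda^{k+1} = \lambda^k - \mu_k c(\vec{x}_k)$ from above. Keeping $\mu$ fixed at 10 is an illustrative choice, made to show that $\mu$ need not grow.

```python
# Augmented Lagrangian method on:  min x1 + x2  s.t.  x1^2 + x2^2 - 2 = 0.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0] + x[1]

def c(x):
    return x[0]**2 + x[1]**2 - 2.0

def LA(x, lam, mu):
    # L_A(x, lam; mu) = f(x) - lam * c(x) + (mu/2) c(x)^2
    return f(x) - lam * c(x) + 0.5 * mu * c(x)**2

x, lam, mu = np.zeros(2), 0.0, 10.0   # mu stays fixed; only lam is updated
for k in range(8):
    x = minimize(LA, x, args=(lam, mu), method='BFGS').x
    lam = lam - mu * c(x)             # multiplier update from the notes
    print(f"k = {k}   x = {x}   c(x) = {c(x):+.2e}   lam = {lam:+.5f}")
```

Here $\lambda$ converges to the true multiplier $\lambda^* = -1/2$ (from $\nabla f = \lambda^* \nabla c$ at $\vec{x}^* = (-1,-1)$), and the constraint violation goes to zero with a fixed, moderate $\mu$.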
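The projection $P$ in the bound-constrained optimality condition is just a componentwise clamp, so the first-order test is a one-liner. A minimal sketch; the function names are this sketch's own.

```python
import numpy as np

def project(g, l, u):
    # P(g, l, u)_i = l_i if g_i <= l_i,  g_i if l_i < g_i < u_i,  u_i if g_i >= u_i
    return np.clip(g, l, u)

def first_order_optimal(x, grad, l, u, tol=1e-10):
    # x solves min f(x) s.t. l <= x <= u to first order
    # exactly when x = P(x - grad f(x), l, u)
    return np.linalg.norm(x - project(x - grad, l, u)) <= tol
```

The same clamp, applied to the box $\max(\vec{\ell}, \vec{x}_k - \Delta_k) \le \vec{x} \le \min(\vec{u}, \vec{x}_k + \Delta_k)$, is what step 2 of the nonlinear gradient projection method uses.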
Interior point method

Consider the problem
  $\min_{\vec{x}} f(\vec{x})$
  s.t. $C_E(\vec{x}) = 0$, $C_I(\vec{x}) - \vec{s} = 0$, $\vec{s} \ge 0$,
where $\vec{s}$ are slack variables. The interior point method starts from a point inside the feasible region and builds "walls" on the boundary of the feasible region: a barrier function goes to infinity when its input is close to zero.

  $\min_{\vec{x},\vec{s}} f(\vec{x}) - \mu \sum_{i=1}^m \log(s_i)$  s.t. $C_E(\vec{x}) = 0$, $C_I(\vec{x}) - \vec{s} = 0$.   (1)

The function $-\log x \to \infty$ as $x \to 0^+$; $\mu$ is the barrier parameter.

An example

Example: $\min_x -x + 1$ s.t. $x \le 1$ (i.e. $1 - x \ge 0$). The barrier subproblem is $\min_x -x + 1 - \mu \ln(1 - x)$.
  $\mu = 1$:       $x^* = 0.00005$
  $\mu = 0.1$:     $x^* = 0.89999$
  $\mu = 0.01$:    $x^* = 0.989999$
  $\mu = 10^{-5}$: $x^* = 0.99993$
[Figure: graph of the barrier objective; its minimizer approaches the boundary $x = 1$ as $\mu \to 0$.]

The Lagrangian of (1) is
  $L(\vec{x}, \vec{s}, \vec{y}, \vec{z}) = f(\vec{x}) - \mu \sum_{i=1}^m \log(s_i) - \vec{y}^T C_E(\vec{x}) - \vec{z}^T (C_I(\vec{x}) - \vec{s})$.
1. Vector $\vec{y}$ is the Lagrange multiplier of the equality constraints.
2. Vector $\vec{z}$ is the Lagrange multiplier of the inequality constraints.

The KKT conditions

The KKT conditions for (1):
  $\nabla_x L = 0 \;\Rightarrow\; \nabla f - A_E^T \vec{y} - A_I^T \vec{z} = 0$
  $\nabla_s L = 0 \;\Rightarrow\; SZe - \mu e = 0$
  $\nabla_y L = 0 \;\Rightarrow\; C_E(\vec{x}) = 0$
  $\nabla_z L = 0 \;\Rightarrow\; C_I(\vec{x}) - \vec{s} = 0$   (2)
Matrix $S = \mathrm{diag}(\vec{s})$ and matrix $Z = \mathrm{diag}(\vec{z})$; $e$ is the vector of all ones. Matrix $A_E$ is the Jacobian of $C_E$ and matrix $A_I$ is the Jacobian of $C_I$.

Newton's step

Let $F = \begin{pmatrix} \nabla_x L \\ \nabla_s L \\ \nabla_y L \\ \nabla_z L \end{pmatrix} = \begin{pmatrix} \nabla f - A_E^T \vec{y} - A_I^T \vec{z} \\ SZe - \mu e \\ C_E(\vec{x}) \\ C_I(\vec{x}) - \vec{s} \end{pmatrix}$.

The interior point method uses Newton's method to solve $F = 0$, with
  $\nabla F = \begin{pmatrix} \nabla_{xx}^2 L & 0 & -A_E^T(\vec{x}) & -A_I^T(\vec{x}) \\ 0 & Z & 0 & S \\ A_E(\vec{x}) & 0 & 0 & 0 \\ A_I(\vec{x}) & -I & 0 & 0 \end{pmatrix}$.

Newton's step solves
  $\nabla F \begin{pmatrix} \vec{p}_x \\ \vec{p}_s \\ \vec{p}_y \\ \vec{p}_z \end{pmatrix} = -F$
and updates
  $\vec{x}_{k+1} = \vec{x}_k + \alpha_x \vec{p}_x$,  $\vec{s}_{k+1} = \vec{s}_k + \alpha_s \vec{p}_s$,  $\vec{y}_{k+1} = \vec{y}_k + \alpha_y \vec{p}_y$,  $\vec{z}_{k+1} = \vec{z}_k + \alpha_z \vec{p}_z$.

Algorithm: interior point method (IPM)
1. Given initial $\vec{x}_0, \vec{s}_0, \vec{y}_0, \vec{z}_0$, and $\mu_0$.
2. For $k = 0, 1, 2, \ldots$ until convergence:
   (a) Compute $\vec{p}_x, \vec{p}_s, \vec{p}_y, \vec{p}_z$ and $\alpha_x, \alpha_s, \alpha_y, \alpha_z$.
   (b) $(\vec{x}_{k+1}, \vec{s}_{k+1}, \vec{y}_{k+1}, \vec{z}_{k+1}) = (\vec{x}_k, \vec{s}_k, \vec{y}_k, \vec{z}_k) + (\alpha_x \vec{p}_x, \alpha_s \vec{p}_s, \alpha_y \vec{p}_y, \alpha_z \vec{p}_z)$.
   (c) Adjust $\mu_{k+1} < \mu_k$.

Some comments on IPM
1. The complementary slackness condition says $s_i z_i = 0$ at the optimal solution. Hence the parameter $\mu$ (with $SZ = \mu I$) needs to decrease to zero as the current solution approaches the optimal solution.
2. Why can we not set $\mu$ to zero, or very small, in the beginning? Because that would make $\vec{x}_k$ go to the nearest constraint, and the entire process would move along constraint by constraint, which again becomes an exponential algorithm.
3. To keep $\vec{x}_k$ (more precisely, $\vec{s}$ and $\vec{z}$) from getting too close to any constraint, IPM also limits the step sizes of $\vec{s}$ and $\vec{z}$, with a parameter $\tau \in (0, 1)$:
   $\alpha_s^{\max} = \max\{\alpha \in (0, 1] : \vec{s} + \alpha \vec{p}_s \ge (1 - \tau)\vec{s}\}$
   $\alpha_z^{\max} = \max\{\alpha \in (0, 1] : \vec{z} + \alpha \vec{p}_z \ge (1 - \tau)\vec{z}\}$
(A sketch of this rule, and a check of the barrier example above, follow below.)
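Comment 3 is the fraction-to-the-boundary rule, and it has a simple closed form: only components with $p_i < 0$ can threaten positivity. A minimal sketch; the default $\tau = 0.995$ is a typical choice in practice, not a value from the notes.

```python
import numpy as np

def max_step(v, p, tau=0.995):
    """Largest alpha in (0, 1] with v + alpha * p >= (1 - tau) * v, for v > 0."""
    neg = p < 0.0                        # only decreasing components constrain alpha
    if not np.any(neg):
        return 1.0
    return min(1.0, float(np.min(-tau * v[neg] / p[neg])))

# example: s = (1, 1), p_s = (-4, 0.5)  ->  alpha is capped near tau/4
print(max_step(np.array([1.0, 1.0]), np.array([-4.0, 0.5])))   # ~0.249
```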
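Returning to the one-dimensional barrier example above: the subproblem $\min_x -x + 1 - \mu \ln(1 - x)$ can be solved in closed form. Setting the derivative $-1 + \mu/(1 - x)$ to zero gives $x^*(\mu) = 1 - \mu$, which is what the numerically computed table entries approximate. A two-line check:

```python
# x*(mu) = 1 - mu for the barrier subproblem of  min -x + 1  s.t.  x <= 1
for mu in [1.0, 0.1, 0.01, 1e-5]:
    print(f"mu = {mu:<8g} x* = {1.0 - mu:.6f}")   # tends to the solution x = 1 as mu -> 0
```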
Interior point method for linear programming

We will use linear programming to illustrate the details of IPM.

The primal:
  $\min_{\vec{x}} \vec{c}^T \vec{x}$  s.t.  $A\vec{x} = \vec{b}$, $\vec{x} \ge 0$.
The dual:
  $\max_{\vec{\lambda}} \vec{b}^T \vec{\lambda}$  s.t.  $A^T \vec{\lambda} + \vec{s} = \vec{c}$, $\vec{s} \ge 0$.

KKT conditions:
  $A^T \vec{\lambda} + \vec{s} = \vec{c}$
  $A\vec{x} = \vec{b}$
  $x_i s_i = 0$, $i = 1, \ldots, n$
  $\vec{x} \ge 0$, $\vec{s} \ge 0$.

Solving the KKT system

Let $X = \mathrm{diag}(x_1, \ldots, x_n)$, $S = \mathrm{diag}(s_1, \ldots, s_n)$, and
  $F = \begin{pmatrix} A^T \vec{\lambda} + \vec{s} - \vec{c} \\ A\vec{x} - \vec{b} \\ XSe - \mu e \end{pmatrix}$.
The problem is to solve $F = 0$.

Newton's method

Using Newton's method,
  $\nabla F = \begin{pmatrix} 0 & A^T & I \\ A & 0 & 0 \\ S & 0 & X \end{pmatrix}$,
solve
  $\nabla F \begin{pmatrix} \vec{p}_x \\ \vec{p}_\lambda \\ \vec{p}_s \end{pmatrix} = -F$
and update $\vec{x}_{k+1} = \vec{x}_k + \alpha_x \vec{p}_x$,  $\vec{\lambda}_{k+1} = \vec{\lambda}_k + \alpha_\lambda \vec{p}_\lambda$,  $\vec{s}_{k+1} = \vec{s}_k + \alpha_s \vec{p}_s$.

How to decide $\mu_k$? Take $\mu_k = \vec{x}_k^T \vec{s}_k / n$, which is called the duality measure, and perturb the complementarity condition by $\sigma_k \mu_k$, where $\sigma_k \in [0, 1]$ is a centering parameter.

The central path

The central path is a set of points $p(\mu) = (\vec{x}_\mu, \vec{\lambda}_\mu, \vec{s}_\mu)$ defined by the solution of
  $A^T \vec{\lambda} + \vec{s} = \vec{c}$
  $A\vec{x} = \vec{b}$
  $x_i s_i = \mu$, $i = 1, 2, \ldots, n$
  $\vec{x}, \vec{s} > 0$.

Algorithm: the interior point method for solving linear programming
1. Given an interior point $\vec{x}_0$ and the initial guess of slack variables $\vec{s}_0$.
2. For $k = 0, 1, \ldots$
   (a) Solve
     $\begin{pmatrix} 0 & A^T & I \\ A & 0 & 0 \\ S_k & 0 & X_k \end{pmatrix} \begin{pmatrix} \Delta \vec{x}_k \\ \Delta \vec{\lambda}_k \\ \Delta \vec{s}_k \end{pmatrix} = \begin{pmatrix} \vec{c} - \vec{s}_k - A^T \vec{\lambda}_k \\ \vec{b} - A\vec{x}_k \\ -X_k S_k e + \sigma_k \mu_k e \end{pmatrix}$
   for $\sigma_k \in [0, 1]$.
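Putting the pieces of this linear-programming section together, here is a minimal end-to-end sketch of the primal-dual iteration: it assembles the Newton system above, chooses the perturbation $\sigma_k \mu_k$ from the duality measure, and applies the fraction-to-the-boundary rule from the general IPM discussion for the remaining update steps. The tiny LP data, the fixed $\sigma_k = 0.1$, $\tau = 0.995$, and the stopping tolerance are all illustrative assumptions of this sketch.

```python
import numpy as np

def max_step(v, p, tau=0.995):
    # fraction to the boundary: largest alpha in (0,1] with v + alpha*p >= (1-tau)*v
    neg = p < 0.0
    return 1.0 if not np.any(neg) else min(1.0, float(np.min(-tau * v[neg] / p[neg])))

# illustrative LP:  min x1 + 2 x2 + 3 x3  s.t.  x1 + x2 + x3 = 3,  x >= 0
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
c = np.array([1.0, 2.0, 3.0])
n, m = 3, 1
e = np.ones(n)

x, lam, s = np.ones(n), np.zeros(m), np.ones(n)    # strictly interior start
for k in range(30):
    mu = x @ s / n                                 # duality measure
    if mu < 1e-9:
        break
    sigma = 0.1                                    # centering parameter in [0, 1]
    X, S = np.diag(x), np.diag(s)
    # Newton matrix [[0 A^T I], [A 0 0], [S 0 X]] and residual right-hand side
    J = np.block([[np.zeros((n, n)), A.T,              np.eye(n)],
                  [A,                np.zeros((m, m)), np.zeros((m, n))],
                  [S,                np.zeros((n, m)), X]])
    rhs = np.concatenate([c - A.T @ lam - s,             # dual feasibility
                          b - A @ x,                     # primal feasibility
                          -X @ S @ e + sigma * mu * e])  # perturbed complementarity
    d = np.linalg.solve(J, rhs)
    dx, dlam, ds = d[:n], d[n:n + m], d[n + m:]
    a_p, a_d = max_step(x, dx), max_step(s, ds)    # primal and dual step lengths
    x, lam, s = x + a_p * dx, lam + a_d * dlam, s + a_d * ds

print("x =", x, "  objective =", c @ x)            # expect x near (3, 0, 0)
```

On this data the iterates follow the central path toward the vertex $\vec{x}^* = (3, 0, 0)$, and the duality measure $\vec{x}^T \vec{s}/n$ decreases toward zero, mirroring the requirement noted earlier that $\mu \to 0$ as the solution is approached.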