
Optimization I; Chapter 3

Chapter 3  Quadratic Programming

3.1 Constrained quadratic programming problems

A special case of the NLP arises when the objective functional f is quadratic and the constraints h, g are linear in x ∈ R^n. Such an NLP is called a Quadratic Programming (QP) problem. Its general form is

    minimize    f(x) := (1/2) x^T B x − x^T b                    (3.1a)
    over        x ∈ R^n
    subject to  A_1 x = c ,                                      (3.1b)
                A_2 x ≤ d ,                                      (3.1c)

where B ∈ R^{n×n} is symmetric, A_1 ∈ R^{m×n}, A_2 ∈ R^{p×n}, and b ∈ R^n, c ∈ R^m, d ∈ R^p.

As we shall see in this chapter, the QP (3.1a)-(3.1c) can be solved iteratively by active set strategies or interior point methods, where each iteration requires the solution of an equality constrained QP problem.

3.2 Equality constrained quadratic programming

If only equality constraints are imposed, the QP (3.1a)-(3.1c) reduces to

    minimize    f(x) := (1/2) x^T B x − x^T b                    (3.2a)
    over        x ∈ R^n
    subject to  A x = c ,                                        (3.2b)

where A ∈ R^{m×n}, m ≤ n. For the time being we assume that A has full row rank m. The KKT conditions for the solution x^* ∈ R^n of the QP (3.2a),(3.2b) give rise to the following linear system:

    [ B   A^T ] [ x^* ]     [ b ]
    [ A   0   ] [ λ^* ]  =  [ c ] ,                              (3.3)

where the coefficient matrix on the left is denoted by K and λ^* ∈ R^m is the associated Lagrange multiplier. We denote by Z ∈ R^{n×(n−m)} the matrix whose columns span Ker A, i.e., AZ = 0.

Definition 3.1 KKT matrix and reduced Hessian
The matrix K in (3.3) is called the KKT matrix, and the matrix Z^T B Z is referred to as the reduced Hessian.

Lemma 3.2 Existence and uniqueness
Assume that A ∈ R^{m×n} has full row rank m ≤ n and that the reduced Hessian Z^T B Z is positive definite. Then the KKT matrix K is nonsingular. Hence, the KKT system (3.3) has a unique solution (x^*, λ^*).

Proof: The proof is left as an exercise. □

Under the conditions of the previous lemma, it follows that the second order sufficient optimality conditions are satisfied, so that x^* is a strict local minimizer of the QP (3.2a),(3.2b).
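As a small numerical illustration (not part of the lecture notes; the helper name solve_kkt and the toy data are ours), the KKT system (3.3) can be assembled and solved directly with NumPy:

```python
import numpy as np

def solve_kkt(B, A, b, c):
    """Assemble the KKT matrix K of (3.3) and solve for (x^*, lambda^*)."""
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([b, c]))
    return sol[:n], sol[n:]

# Toy problem: minimize x_1^2 + x_2^2 - x_1  subject to  x_1 + x_2 = 1.
B = np.array([[2.0, 0.0], [0.0, 2.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0, 0.0])
c = np.array([1.0])
x, lam = solve_kkt(B, A, b, c)    # x = [0.75, 0.25], lam = [-0.5]
```

Forming and factorizing K explicitly is only practical for moderate dimensions; the remainder of the chapter discusses alternatives that exploit the block structure of K.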
A direct argument shows that x^* is in fact a global minimizer.

Theorem 3.3 Global minimizer
Let the assumptions of Lemma 3.2 be satisfied and let (x^*, λ^*) be the unique solution of the KKT system (3.3). Then x^* is the unique global solution of the QP (3.2a),(3.2b).

Proof: Let x ∈ F be a feasible point, i.e., Ax = c, and set p := x^* − x. Then Ap = 0. Substituting x = x^* − p into the objective functional, we get

    f(x) = (1/2) (x^* − p)^T B (x^* − p) − (x^* − p)^T b
         = (1/2) p^T B p − p^T B x^* + p^T b + f(x^*) .

Now, (3.3) implies B x^* = b − A^T λ^*. Observing Ap = 0, we have

    p^T B x^* = p^T (b − A^T λ^*) = p^T b − (Ap)^T λ^* = p^T b ,

whence

    f(x) = (1/2) p^T B p + f(x^*) .

In view of p ∈ Ker A, we can write p = Z u, u ∈ R^{n−m}, and hence

    f(x) = (1/2) u^T Z^T B Z u + f(x^*) .

Since Z^T B Z is positive definite, we deduce f(x) > f(x^*) for all feasible x ≠ x^*. Consequently, x^* is the unique global minimizer of the QP (3.2a),(3.2b). □

3.3 Direct solution of the KKT system

As far as the direct solution of the KKT system (3.3) is concerned, we distinguish between symmetric indefinite factorization and the range-space and null-space approaches.

3.3.1 Symmetric indefinite factorization

A possible way to solve the KKT system (3.3) is to provide a symmetric factorization of the KKT matrix according to

    P^T K P = L D L^T ,                                          (3.4)

where P is an appropriately chosen permutation matrix, L is lower triangular with diag(L) = I, and D is block diagonal. Based on (3.4), the KKT system (3.3) is solved as follows:

    solve  L y = P^T (b, c)^T ,                                  (3.5a)
    solve  D ŷ = y ,                                             (3.5b)
    solve  L^T ỹ = ŷ ,                                           (3.5c)
    set    (x^*, λ^*)^T = P ỹ .                                  (3.5d)

3.3.2 Range-space approach

The range-space approach applies if B ∈ R^{n×n} is symmetric positive definite. Block Gauss elimination of the primal variable x^* leads to the Schur complement system

    A B^{−1} A^T λ^* = A B^{−1} b − c                            (3.6)

with the Schur complement S ∈ R^{m×m} given by S := A B^{−1} A^T. Once λ^* has been computed from (3.6), the primal variable x^* is recovered from B x^* = b − A^T λ^*.
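The range-space computation can be sketched as follows (a minimal sketch assuming SciPy is available and B is symmetric positive definite; the function name and toy data are ours):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def range_space_solve(B, A, b, c):
    """Solve the KKT system (3.3) via the Schur complement system (3.6);
    B must be symmetric positive definite."""
    Bfac = cho_factor(B)                          # Cholesky factorization of B
    S = A @ cho_solve(Bfac, A.T)                  # S = A B^{-1} A^T
    lam = np.linalg.solve(S, A @ cho_solve(Bfac, b) - c)   # (3.6)
    x = cho_solve(Bfac, b - A.T @ lam)            # B x^* = b - A^T lambda^*
    return x, lam

# Toy problem: minimize x_1^2 + x_2^2 - x_1  subject to  x_1 + x_2 = 1.
B = np.diag([2.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0, 0.0])
c = np.array([1.0])
x, lam = range_space_solve(B, A, b, c)
```

Note that B is factorized once and reused for all three solves, which is where the approach gains when m is small and B is easily invertible.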
The range-space approach is particularly effective if

- B is well conditioned and easily invertible (e.g., B is diagonal or block-diagonal),
- B^{−1} is known explicitly (e.g., by means of a quasi-Newton updating formula),
- the number m of equality constraints is small.

3.3.3 Null-space approach

The null-space approach does not require regularity of B and thus has a wider range of applicability than the range-space approach. We assume that A ∈ R^{m×n} has full row rank m and that Z^T B Z is positive definite, where Z ∈ R^{n×(n−m)} is the matrix whose columns span Ker A and which can be computed by QR factorization (cf. Chapter 2.4). We partition the vector x^* according to

    x^* = Y w_Y + Z w_Z ,                                        (3.7)

where Y ∈ R^{n×m} is such that [Y Z] ∈ R^{n×n} is nonsingular, and w_Y ∈ R^m, w_Z ∈ R^{n−m}. Substituting (3.7) into the second equation of (3.3) and observing AZ = 0, we obtain

    A x^* = A Y w_Y + A Z w_Z = A Y w_Y = c ,                    (3.8)

i.e., Y w_Y is a particular solution of Ax = c. Since A ∈ R^{m×n} has rank m and [Y Z] ∈ R^{n×n} is nonsingular, the product matrix A [Y Z] = [A Y  0] has full row rank m, so that A Y ∈ R^{m×m} is nonsingular. Hence, w_Y is well determined by (3.8). On the other hand, substituting (3.7) into the first equation of (3.3), we get

    B Y w_Y + B Z w_Z + A^T λ^* = b .

Multiplying by Z^T and observing Z^T A^T = (AZ)^T = 0 yields

    Z^T B Z w_Z = Z^T b − Z^T B Y w_Y .                          (3.9)

The reduced KKT system (3.9) can be solved by a Cholesky factorization of the reduced Hessian Z^T B Z ∈ R^{(n−m)×(n−m)}. Once w_Y and w_Z have been computed as the solutions of (3.8) and (3.9), x^* is obtained according to (3.7). Finally, the Lagrange multiplier turns out to be the solution of the linear system arising from the multiplication of the first equation in (3.3) by Y^T:

    (A Y)^T λ^* = Y^T b − Y^T B x^* .                            (3.10)

3.4 Iterative solution of the KKT system

If the direct solution of the KKT system (3.3) is computationally too costly, the alternative is to use an iterative method.
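The null-space steps (3.7)-(3.10) above can be sketched in NumPy as follows (a minimal sketch; taking Y and Z from a full QR factorization of A^T is one convenient choice, and the function name is ours):

```python
import numpy as np

def null_space_solve(B, A, b, c):
    """Null-space method (3.7)-(3.10); Z spans Ker A and Y a complementary
    subspace, both taken from a full QR factorization of A^T."""
    m, n = A.shape
    Q, _ = np.linalg.qr(A.T, mode='complete')   # Q = [Y Z], orthogonal
    Y, Z = Q[:, :m], Q[:, m:]
    wY = np.linalg.solve(A @ Y, c)              # (3.8):  A Y wY = c
    H = Z.T @ B @ Z                             # reduced Hessian
    wZ = np.linalg.solve(H, Z.T @ (b - B @ (Y @ wY)))    # (3.9)
    x = Y @ wY + Z @ wZ                         # (3.7)
    lam = np.linalg.solve((A @ Y).T, Y.T @ (b - B @ x))  # (3.10)
    return x, lam

# Toy problem: minimize x_1^2 + x_2^2 - x_1  subject to  x_1 + x_2 = 1.
B = np.array([[2.0, 0.0], [0.0, 2.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0, 0.0])
c = np.array([1.0])
x, lam = null_space_solve(B, A, b, c)
```

In this sketch the reduced system (3.9) is solved by a generic dense solve; as the text notes, a Cholesky factorization of Z^T B Z is the method of choice since the reduced Hessian is positive definite.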
An iterative solver can be applied either to the entire KKT system or, as in the range-space and null-space approaches, it can exploit the special structure of the KKT matrix.

3.4.1 Krylov methods

The KKT matrix K ∈ R^{(n+m)×(n+m)} is indefinite. In fact, if A has full row rank m, K has n positive and m negative eigenvalues. Therefore, for the iterative solution of (3.3), Krylov subspace methods like GMRES (Generalized Minimum RESidual) and QMR (Quasi Minimum Residual) are appropriate candidates.

3.4.2 Transforming range-space iterations

We assume B ∈ R^{n×n} to be symmetric positive definite and suppose that B̃ is some symmetric positive definite and easily invertible approximation of B such that B̃^{−1} B ≈ I. We choose K_L ∈ R^{(n+m)×(n+m)} as the lower triangular block matrix

    K_L = [  I           0 ]
          [ −A B̃^{−1}    I ] ,                                   (3.11)

which gives rise to the regular splitting

    K_L K = [ B̃   A^T ]  −  [  B̃ (I − B̃^{−1} B)   0 ]
            [ 0    S̃  ]     [ −A (I − B̃^{−1} B)   0 ]  =: M_1 − M_2 ,   M_2 ≈ 0 ,   (3.12)

where S̃ ∈ R^{m×m} is given by

    S̃ := −A B̃^{−1} A^T .                                         (3.13)

We set ψ := (x, λ)^T and α := (b, c)^T. Given an initial iterate ψ^(0) ∈ R^{n+m}, we compute ψ^(k), k ∈ N, by means of the transforming range-space iteration

    ψ^(k+1) = ψ^(k) + M_1^{−1} K_L (α − K ψ^(k))
            = (I − M_1^{−1} K_L K) ψ^(k) + M_1^{−1} K_L α ,   k ≥ 0 .    (3.14)

The transforming range-space iteration (3.14) is implemented as follows:

    d^(k) = (d_1^(k), d_2^(k))^T := α − K ψ^(k) ,                (3.15a)
    K_L d^(k) = (d_1^(k), −A B̃^{−1} d_1^(k) + d_2^(k))^T ,       (3.15b)
    solve  M_1 φ^(k) = K_L d^(k) ,                               (3.15c)
    ψ^(k+1) = ψ^(k) + φ^(k) .                                    (3.15d)

3.4.3 Transforming null-space iterations

We assume that x ∈ R^n and λ ∈ R^m admit the decompositions

    x = (x_1, x_2)^T ,   x_1 ∈ R^{m_1} ,   x_2 ∈ R^{n−m_1} ,     (3.16a)
    λ = (λ_1, λ_2)^T ,   λ_1 ∈ R^{m_1} ,   λ_2 ∈ R^{m−m_1} ,     (3.16b)

and that A ∈ R^{m×n} and B ∈ R^{n×n} can be partitioned by means of

    A = [ A_11   A_12 ]        B = [ B_11   B_12 ]
        [ A_21   A_22 ] ,          [ B_21   B_22 ] ,             (3.17)

where A_11, B_11 ∈ R^{m_1×m_1} with nonsingular A_11.
Partitioning the right-hand side in (3.3) accordingly, the KKT system takes the form

    [ B_11   B_12   A_11^T   A_21^T ] [ x_1^* ]     [ b_1 ]
    [ B_21   B_22   A_12^T   A_22^T ] [ x_2^* ]     [ b_2 ]
    [ A_11   A_12   0        0      ] [ λ_1^* ]  =  [ c_1 ]
    [ A_21   A_22   0        0      ] [ λ_2^* ]     [ c_2 ] .    (3.18)

We rearrange (3.18) by exchanging the second and third block rows and columns:

    [ B_11   A_11^T   B_12   A_21^T ] [ x_1^* ]     [ b_1 ]
    [ A_11   0        A_12   0      ] [ λ_1^* ]     [ c_1 ]
    [ B_21   A_12^T   B_22   A_22^T ] [ x_2^* ]  =  [ b_2 ]
    [ A_21   0        A_22   0      ] [ λ_2^* ]     [ c_2 ] .    (3.19)

Observing B_12 = B_21^T, in block form (3.19) can be written as

    [ A   B^T ] [ ψ_1^* ]     [ α_1 ]
    [ B   D   ] [ ψ_2^* ]  =  [ α_2 ] ,                          (3.20)

where A, B, D denote the indicated 2×2 blocks of the rearranged KKT matrix in (3.19) (not the original A and B), the block matrix on the left is again denoted by K, and ψ_i^* := (x_i^*, λ_i^*)^T, α_i := (b_i, c_i)^T, 1 ≤ i ≤ 2.
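Returning to Section 3.4.2, the transforming range-space iteration (3.15a)-(3.15d) can be sketched as follows (a minimal sketch; the choice B̃ = diag(B), the dense solves, and all names are ours):

```python
import numpy as np

def transforming_range_space(B, A, b, c, Bt, num_iters=50):
    """Transforming range-space iteration (3.14)/(3.15) with a symmetric
    positive definite, easily invertible approximation Bt of B."""
    n, m = B.shape[0], A.shape[0]
    K = np.block([[B, A.T], [A, np.zeros((m, m))]])
    alpha = np.concatenate([b, c])
    Bt_inv = np.linalg.inv(Bt)                  # cheap if Bt is (block) diagonal
    St = -A @ Bt_inv @ A.T                      # (3.13)
    M1 = np.block([[Bt, A.T], [np.zeros((m, n)), St]])
    psi = np.zeros(n + m)                       # initial iterate psi^(0) = 0
    for _ in range(num_iters):
        d = alpha - K @ psi                                           # (3.15a)
        KLd = np.concatenate([d[:n], -A @ (Bt_inv @ d[:n]) + d[n:]])  # (3.15b)
        psi = psi + np.linalg.solve(M1, KLd)                          # (3.15c), (3.15d)
    return psi[:n], psi[n:]

# Toy problem with Bt = diag(B); the off-diagonal 0.1 makes Bt != B,
# so the iteration converges but does not terminate in one step.
B = np.array([[2.0, 0.1], [0.1, 2.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0, 0.0])
c = np.array([1.0])
x, lam = transforming_range_space(B, A, b, c, Bt=np.diag(np.diag(B)))
```

In practice the block upper triangular structure of M_1 would be exploited in (3.15c) (a solve with S̃ followed by a solve with B̃) instead of the generic dense solve used here.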