An Interior Point Method with a Primal-Dual Quadratic Barrier Penalty Function for Nonlinear Semidefinite Programming


Atsushi Kato*, Hiroshi Yabe† and Hiroshi Yamashita‡

August 20, 2011 (Revised June 1, 2012)

* Department of Mathematical Information Science, Tokyo University of Science, 1-3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan. e-mail: [email protected]
† Department of Mathematical Information Science, Tokyo University of Science, 1-3, Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan. e-mail: [email protected]
‡ Mathematical Systems Inc., 2-4-3, Shinjuku, Shinjuku-ku, Tokyo 160-0022, Japan. e-mail: [email protected]

Abstract

In this paper, we consider an interior point method for nonlinear semidefinite programming. Yamashita, Yabe and Harada presented a primal-dual interior point method in which a nondifferentiable merit function was used. By using shifted barrier KKT conditions, we propose a differentiable primal-dual merit function within the framework of the line search strategy, and prove the global convergence property of our method.

Keywords. Nonlinear semidefinite programming, Primal-dual interior point method, Primal-dual quadratic barrier penalty function, Global convergence

1 Introduction

In this paper, we consider the following nonlinear semidefinite programming (SDP) problem:

\[
\begin{array}{cl}
\min & f(x), \quad x \in \mathbb{R}^n, \\
\mathrm{s.t.} & g(x) = 0, \quad X(x) \succeq 0,
\end{array}
\tag{1.1}
\]

where the functions $f : \mathbb{R}^n \to \mathbb{R}$, $g : \mathbb{R}^n \to \mathbb{R}^m$ and $X : \mathbb{R}^n \to S^p$ are sufficiently smooth, and $S^p$ denotes the set of $p$-th order real symmetric matrices. By $X(x) \succeq 0$ and $X(x) \succ 0$, we mean that the matrix $X(x)$ is positive semidefinite and positive definite, respectively.

Numerical methods for solving the nonlinear SDP have been developed and studied by several authors [3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 15, 17, 21]. Drummond, Iusem and Svaiter [4] studied the central path of a nonlinear convex SDP and discussed the relationship between the logarithmic barrier function and the objective function. Leibfritz and Mostafa [15] proposed an interior point trust region algorithm. Huang, Yang and Teo [9] transformed a nonlinear SDP into a mathematical program with a matrix equality constraint and proposed a sequential quadratic penalty method. Correa and Ramirez [3] presented a sequential semidefinite programming local method, which is based on the sequential quadratic programming method with a line search strategy. Kanzow, Nagel, Kato and Fukushima [12] proposed a trust region-type method, which solves at each iteration a subproblem that can be converted to a linear semidefinite program. Yamashita, Yabe and Harada [21] proposed a primal-dual interior point method with a line search strategy. Huang, Teo and Yang [10] introduced an approximate augmented Lagrangian function for nonlinear SDP. Huang, Yang and Teo [11] considered a nonlinear SDP with a matrix equality constraint and proposed a penalty method. Gomez and Ramirez [7] proposed a filter method for nonlinear SDP.

Though Yamashita, Yabe and Harada [21] derived a primal-dual merit function, their function was not differentiable. Thus it is significant to consider a differentiable primal-dual merit function. In this paper, following the idea of Yamashita and Yabe [20] for nonlinear programming problems, we propose a primal-dual interior point method whose merit function is differentiable for nonlinear SDP problems. Our primal-dual interior point method attains global convergence under weaker assumptions than those in [21].
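As a concrete illustration of the problem format (1.1), the following minimal sketch builds a tiny instance with hypothetical data ($n = p = 2$, $m = 1$; the functions $f$, $g$ and $X$ below are illustrative assumptions, not taken from this paper) and checks feasibility of a candidate point numerically.

```python
# A minimal sketch of problem (1.1) with assumed, illustrative data:
# it only checks feasibility (g(x) = 0 and X(x) positive semidefinite).
import numpy as np

def f(x):                                  # objective f : R^n -> R
    return x[0] ** 2 + x[1] ** 2

def g(x):                                  # equality constraints g : R^n -> R^m
    return np.array([x[0] + x[1] - 1.0])

def X(x):                                  # matrix-valued map X : R^n -> S^p
    return np.array([[1.0 + x[0], 0.5],
                     [0.5,        1.0 + x[1]]])

x = np.array([0.5, 0.5])                   # a candidate point
eq_ok  = np.allclose(g(x), 0.0)                      # g(x) = 0 ?
psd_ok = np.min(np.linalg.eigvalsh(X(x))) >= 0.0     # X(x) >= 0 via smallest eigenvalue
print(f"f(x) = {f(x):.3f}, feasible: {eq_ok and psd_ok}")
```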
This paper is organized as follows. In Section 2, we introduce the optimality conditions for problem (1.1) and some notation, and briefly review the primal-dual interior point method of Yamashita et al. [21]. In Section 3, we describe how to obtain a search direction and a step size. We prove the global convergence of the proposed method in Section 4.

In what follows, $\|\cdot\|$, $\|\cdot\|_1$ and $\|\cdot\|_\infty$ denote the $l_2$, $l_1$ and $l_\infty$ norms for vectors, and $\|\cdot\|_F$ denotes the Frobenius norm for matrices. We define the norm

\[
\left\| \begin{pmatrix} s \\ t \\ U \end{pmatrix} \right\|_*
= \sqrt{\, \left\| \begin{pmatrix} s \\ t \end{pmatrix} \right\|^2 + \|U\|_F^2 \,}
\]

for $(s, t, U) \in \mathbb{R}^n \times \mathbb{R}^m \times S^p$. The superscript $T$ denotes the transpose of a vector or a matrix, $I$ denotes the identity matrix, and $\mathrm{tr}(M)$ denotes the trace of the matrix $M$. We define the inner product $\langle M_1, M_2 \rangle$ by $\langle M_1, M_2 \rangle = \mathrm{tr}(M_1 M_2^T)$ for any matrices $M_1$ and $M_2$ in $\mathbb{R}^{p \times p}$.

2 Optimality conditions and algorithm for finding a KKT point

In this section, we introduce the optimality conditions for problem (1.1) and Algorithm SDPIP, which finds a KKT point.

First, we state the optimality conditions for problem (1.1). Define the Lagrangian function of problem (1.1) by

\[
L(w) = f(x) - y^T g(x) - \langle X(x), Z \rangle,
\]

where $w = (x, y, Z)$, and $y \in \mathbb{R}^m$ and $Z \in S^p$ are the Lagrange multiplier vector and matrix corresponding to the equality and positive semidefiniteness constraints, respectively. The first-order necessary conditions for optimality of problem (1.1), which are called the Karush-Kuhn-Tucker (KKT) conditions, are given by (see [2]):

\[
r_0(w) \equiv \begin{pmatrix} \nabla_x L(w) \\ g(x) \\ X(x)Z \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\tag{2.1}
\]

and

\[
X(x) \succeq 0, \quad Z \succeq 0,
\tag{2.2}
\]

where $\nabla_x L(w)$ is the gradient of the Lagrangian function, given by

\[
\nabla_x L(w) = \nabla f(x) - \nabla g(x) y - A^*(x) Z.
\]

Here the matrix $\nabla g(x)$ is given by

\[
\nabla g(x) = (\nabla g_1(x) \ \cdots \ \nabla g_m(x)) \in \mathbb{R}^{n \times m},
\]

and $A^*(x)$ is the operator such that, for $Z$,

\[
A^*(x) Z = \begin{pmatrix} \langle A_1(x), Z \rangle \\ \vdots \\ \langle A_n(x), Z \rangle \end{pmatrix},
\]

where the matrices $A_i(x) \in S^p$ are defined by

\[
A_i(x) = \frac{\partial X(x)}{\partial x_i}
\]

for $i = 1, \ldots, n$.

Given a positive barrier parameter $\mu$, interior point methods usually replace the complementarity condition $X(x)Z = 0$ by $X(x)Z = \mu I$ and deal with the barrier KKT conditions

\[
\begin{pmatrix} \nabla_x L(w) \\ g(x) \\ X(x)Z - \mu I \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\quad \text{and} \quad
X(x) \succ 0, \ Z \succ 0
\]

instead of conditions (2.1) and (2.2). These conditions are derived from the necessary conditions for optimality of the following problem:

\[
\min_{X(x) \succ 0} \ F_{BP1}(x; \mu) = f(x) + \rho \|g(x)\|_1 - \mu \log(\det X(x)), \quad x \in \mathbb{R}^n,
\tag{2.3}
\]

where $\rho$ is a given positive penalty parameter (see [19]). In connection with (1.1), we consider the following problem:

\[
\min_{X(x) \succ 0} \ F_{BP2}(x; \mu) = f(x) + \frac{1}{2\mu} \|g(x)\|^2 - \mu \log(\det X(x)), \quad x \in \mathbb{R}^n.
\tag{2.4}
\]

The necessary conditions for the optimality of this problem are given by

\[
\nabla F_{BP2}(x; \mu) = \nabla f(x) + \frac{1}{\mu} \nabla g(x) g(x) - \mu A^*(x) X^{-1}(x) = 0
\tag{2.5}
\]

and $X(x) \succ 0$. We define the variables $y$ and $Z$ by $y = -g(x)/\mu$ and $Z = \mu X(x)^{-1}$. Then the above conditions can be written as

\[
r(w; \mu) = \begin{pmatrix} \nabla_x L(w) \\ g(x) + \mu y \\ X(x)Z - \mu I \end{pmatrix}
= \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}
\tag{2.6}
\]

and

\[
X(x) \succ 0, \quad Z \succ 0.
\tag{2.7}
\]

We call these conditions the shifted barrier KKT (SBKKT) conditions, a point which satisfies the SBKKT conditions an SBKKT point, and a point which satisfies conditions (2.7) an interior point. It should be noted that we treat $x$, $y$ and $Z$ as independent variables and that condition (2.6) reduces to condition (2.1) when $\mu = 0$, i.e., $r_0(w) = r(w; 0)$.
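The following minimal sketch evaluates the SBKKT residual $r(w; \mu)$ of (2.6) and the norm $\|\cdot\|_*$ for a small hypothetical instance (the problem data below are illustrative assumptions, not taken from this paper). With the choices $y = -g(x)/\mu$ and $Z = \mu X(x)^{-1}$, the second and third blocks of $r(w; \mu)$ vanish and the first block coincides with $\nabla F_{BP2}(x; \mu)$ in (2.5), so the printed value is simply $\|\nabla F_{BP2}(x; \mu)\|$.

```python
# A minimal sketch (illustrative data only): the SBKKT residual r(w; mu) of (2.6)
# and the norm ||.||_* for a small assumed instance with an affine X(x).
import numpy as np

# Assumed data: f(x) = x1^2 + x2^2, g(x) = x1 + x2 - 1,
# X(x) = C + x1*A1 + x2*A2, so the matrices A_i(x) are constant.
A1 = np.array([[1.0, 0.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [0.0, 1.0]])
C  = np.array([[1.0, 0.5], [0.5, 1.0]])

def grad_f(x): return 2.0 * x
def g(x):      return np.array([x[0] + x[1] - 1.0])
def grad_g(x): return np.array([[1.0], [1.0]])               # n x m matrix
def X(x):      return C + x[0] * A1 + x[1] * A2

def A_star(Z):                         # (A^*(x) Z)_i = <A_i(x), Z>
    return np.array([np.trace(A1 @ Z), np.trace(A2 @ Z)])

def sbkkt_residual(x, y, Z, mu):
    r1 = grad_f(x) - grad_g(x) @ y - A_star(Z)               # grad_x L(w)
    r2 = g(x) + mu * y                                       # shifted equality block
    r3 = X(x) @ Z - mu * np.eye(2)                           # complementarity block
    return r1, r2, r3

def star_norm(r1, r2, r3):             # ||(s, t, U)||_* = sqrt(||(s, t)||^2 + ||U||_F^2)
    return np.sqrt(np.sum(r1**2) + np.sum(r2**2) + np.linalg.norm(r3, "fro")**2)

mu = 0.1
x  = np.array([0.4, 0.6])
y  = -g(x) / mu                        # multipliers induced by problem (2.4)
Z  = mu * np.linalg.inv(X(x))
print(star_norm(*sbkkt_residual(x, y, Z, mu)))   # only the grad_x L block is nonzero here
```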
In order to obtain a KKT point, we propose the following algorithm, which uses the SBKKT conditions.

Algorithm SDPIP

Step 0. (Initialization) Set $\varepsilon > 0$, $M_c > 0$ and $k = 0$. Let a positive decreasing sequence $\{\mu_k\}$, $\mu_k \to 0$, be given.

Step 1. (Approximate SBKKT point) Find an interior point $w_{k+1}$ that satisfies

\[
\| r(w_{k+1}; \mu_k) \|_* \le M_c \mu_k.
\tag{2.8}
\]

Step 2. (Termination) If $\| r_0(w_{k+1}) \|_* \le \varepsilon$, then stop.

Step 3. (Update) Set $k := k + 1$ and go to Step 1.

In contrast to the interior point method in [21], the stopping criterion (2.8) contains $g(x) + \mu y$. We note that the global convergence property of Algorithm SDPIP can be shown in the same way as Theorem 1 of [21].

3 How to obtain an approximate SBKKT point

Algorithm SDPIP given in the previous section needs to find an approximate SBKKT point at each iteration. In this section, we propose an algorithm to obtain such a point, which will be described as Algorithm SDPLS at the end of this section. Algorithm SDPLS uses the iterative scheme

\[
w_{k+1} = w_k + \alpha_k \Delta w_k,
\]

where the subscript $k$ denotes the iteration count of Algorithm SDPLS, $\Delta w_k = (\Delta x_k, \Delta y_k, \Delta Z_k) \in \mathbb{R}^n \times \mathbb{R}^m \times S^p$ is the $k$-th search direction and $\alpha_k > 0$ is the $k$-th step size. In what follows, we denote $X(x)$ simply by $X$ when there is no danger of confusion. Note that an initial interior point $w_0$ and a fixed barrier parameter $\mu > 0$ are taken over from Algorithm SDPIP.

This section is organized as follows. In Section 3.1, we introduce a Newton-like method to calculate a search direction $\Delta w_k$. In Section 3.2, we propose our merit function. In Section 3.3, we describe how to obtain a suitable step size $\alpha_k$ which guarantees that the new point $w_{k+1}$ is an interior point. In Section 3.4, we give Algorithm SDPLS.

3.1 How to obtain a Newton direction

In this subsection, we omit the subscript $k$ and assume that $X \succ 0$ and $Z \succ 0$. To obtain a search direction $\Delta w$, we apply a Newton-like method to the system of equations (2.6).
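The assumption $X \succ 0$ and $Z \succ 0$ must be preserved along the iterations, which is the role of the step-size selection in Section 3.3. As a minimal sketch only, the following shows one common backtracking safeguard that keeps the updated point interior in the sense of (2.7); the rule, the function names and the data are illustrative assumptions and not the step-size rule derived in this paper.

```python
# A minimal sketch (an assumed backtracking rule, not the rule of Section 3.3):
# shrink the trial step size alpha until X(x + alpha*dx) and Z + alpha*dZ are
# both positive definite, so that w + alpha*dw remains an interior point (2.7).
import numpy as np

def is_pd(M):
    """Check positive definiteness via a Cholesky factorization attempt."""
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

def interior_step_size(X_of, x, dx, Z, dZ, alpha0=1.0, shrink=0.5, max_tries=50):
    """Backtrack from alpha0 until both matrix iterates are positive definite."""
    alpha = alpha0
    for _ in range(max_tries):
        if is_pd(X_of(x + alpha * dx)) and is_pd(Z + alpha * dZ):
            return alpha
        alpha *= shrink
    return 0.0   # give up (a sufficiently small alpha always succeeds at an interior point)

# Hypothetical usage with an affine X(x), as in the earlier sketch:
X_of = lambda x: np.array([[1.0 + x[0], 0.5], [0.5, 1.0 + x[1]]])
x, dx = np.array([0.4, 0.6]), np.array([-2.0, -2.0])
Z, dZ = 0.1 * np.linalg.inv(X_of(x)), -0.2 * np.eye(2)
print(interior_step_size(X_of, x, dx, Z, dZ))   # a step size keeping (2.7) satisfied
```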
