The PPADMM Method for Solving Quadratic Programming Problems

Hai-Long Shen 1,* and Xu Tang 2

1 Department of Mathematics, College of Sciences, Northeastern University, Shenyang 100819, China
2 Northwest Institute of Mechanical and Electrical Engineering, Xianyang 712000, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-024-83683382

Abstract: In this paper, a preconditioned and proximal alternating direction method of multipliers (PPADMM) is established for iteratively solving equality-constrained quadratic programming problems. Based on a strict matrix analysis, we prove that this method is asymptotically convergent. We also show the connection between this method and some existing methods, so it combines the advantages of those methods. Finally, numerical examples show that the proposed algorithm is efficient, stable, and flexible for solving quadratic programming problems with equality constraints.

Keywords: quadratic programming problem; global convergence; preconditioning and proximal terms; iterative methods; convex problems

1. Introduction

This manuscript introduces the following convex optimization model with linear constraints and a separable objective function:

$$\min\ f_1(x) + f_2(y), \qquad \text{s.t. } Ax + By = b, \tag{1}$$

where $A \in \mathbb{R}^{p\times n}$ and $B \in \mathbb{R}^{p\times m}$ are two matrices, and $b \in \mathbb{R}^{p}$ is a known vector. The objective functions $f_1 : \mathbb{R}^{n} \to \mathbb{R}$ and $f_2 : \mathbb{R}^{m} \to \mathbb{R}$ are two quadratic functions defined by

$$f_1(x) = \tfrac{1}{2}x^{T}Fx + x^{T}f, \qquad f_2(y) = \tfrac{1}{2}y^{T}Gy + y^{T}g, \tag{2}$$

where $F \in \mathbb{R}^{n\times n}$ and $G \in \mathbb{R}^{m\times m}$ are symmetric positive semidefinite matrices and $f \in \mathbb{R}^{n}$, $g \in \mathbb{R}^{m}$ are known vectors. This class of convex minimization problems arises in many areas of computational science and engineering, such as compressed sensing [1], finance [2,3], image restoration [4–6], network optimization problems [7,8], and traffic planning convex problems [9–12]. The model (1)–(2) captures many applications in different areas; see the $l_1$-norm regularized least-squares problems in [12,13], the total variation image restoration in [13–16], and the standard quadratic programming problems in [7,13].

Let $H \in \mathbb{R}^{p\times p}$ be a symmetric positive definite matrix, let $\langle \cdot, \cdot \rangle_H$ represent the weighted inner product with weighting matrix $H$, let $\|\cdot\|$ stand for the Euclidean norm, and let $\|\cdot\|_H$ stand for the analogous weighted norm. Note that for vectors $u, v \in \mathbb{R}^{p}$ and a matrix $X \in \mathbb{R}^{p\times p}$, it holds that $\langle u, v \rangle_H = \langle Hu, v \rangle$, $\|u\|_H = \|H^{\frac{1}{2}}u\|$, and $\|X\|_H = \|H^{\frac{1}{2}}XH^{-\frac{1}{2}}\|$.

If $\langle u, v \rangle_H = 0$, we say that $u, v \in \mathbb{R}^{p}$ are $H$-orthogonal, which is denoted by $u \perp_H v$. In particular, if $H$ is the identity matrix, then the vectors $u$ and $v$ are orthogonal, which is simply denoted by $u \perp v$. For $z \in \mathbb{C}$, $\bar{z}$ stands for its complex conjugate.
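To make the notation above concrete, the following NumPy sketch (not part of the original article) builds a small random instance of the model (1)–(2) and numerically checks the weighted inner-product and $H$-norm identities; all dimensions, data, and variable names are illustrative assumptions.

```python
# Minimal sketch (illustrative data only): a toy instance of problem (1)-(2)
# and a numerical check of the weighted inner-product and H-norm identities.
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 4, 3, 2                        # assumed toy dimensions

# Quadratic data of (2): F, G symmetric positive semidefinite, f, g vectors.
M = rng.standard_normal((n, n)); F = M.T @ M
N = rng.standard_normal((m, m)); G = N.T @ N
f = rng.standard_normal(n); g = rng.standard_normal(m)

# Linear constraint data of (1): A x + B y = b.
A = rng.standard_normal((p, n)); B = rng.standard_normal((p, m))
b = rng.standard_normal(p)

def f1(x):  # f1(x) = (1/2) x^T F x + x^T f
    return 0.5 * x @ F @ x + x @ f

def f2(y):  # f2(y) = (1/2) y^T G y + y^T g
    return 0.5 * y @ G @ y + y @ g

# Weighted inner product and norm with a symmetric positive definite H.
H = rng.standard_normal((p, p)); H = H.T @ H + p * np.eye(p)
u, v = rng.standard_normal(p), rng.standard_normal(p)

inner_H = u @ H @ v                        # <u, v>_H = <Hu, v>
L = np.linalg.cholesky(H)                  # H = L L^T, so ||u||_H = ||L^T u||
norm_H = np.linalg.norm(L.T @ u)
assert np.isclose(norm_H ** 2, u @ H @ u)  # both definitions of ||u||_H agree
```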
The problem (1)–(2) is mathematically equivalent to the unconstrained optimization problem [7]

$$\max_{\lambda}\min_{x,y}\ \psi(x, y, \lambda), \tag{3}$$

where $\psi(x, y, \lambda)$ is the augmented Lagrangian function defined as

$$\psi(x, y, \lambda) = f_1(x) + f_2(y) - \langle Ax + By - b, \lambda \rangle + \frac{\beta}{2}\|Ax + By - b\|^{2}, \tag{4}$$

where $\lambda$ is the Lagrangian multiplier and $\beta$ is a regularization parameter. In a word, a point $(x_*, y_*)$ is the solution to the problem (1)–(2) if and only if there exists $\lambda_* \in \mathbb{R}^{p}$ such that the point $(x_*, y_*, \lambda_*) \in \mathbb{R}^{n} \times \mathbb{R}^{m} \times \mathbb{R}^{p}$ is the solution to the problem (3)–(4) [7].

The most common method to solve the problem (3)–(4) is the alternating direction method of multipliers (ADMM) [7], in which each iteration of the augmented Lagrangian method (ALM) [17,18] has a Gauss–Seidel decomposition. The scheme of the ADMM method for (1)–(2) is

$$\begin{cases} x^{(k+1)} = \arg\min_{x}\ \big\{\psi(x, y^{(k)}, \lambda^{(k)})\big\}, \\ y^{(k+1)} = \arg\min_{y}\ \big\{\psi(x^{(k+1)}, y, \lambda^{(k)})\big\}, \\ \lambda^{(k+1)} = \lambda^{(k)} - \beta\big(Ax^{(k+1)} + By^{(k+1)} - b\big). \end{cases} \tag{5}$$

A significant advantage of the ADMM is that each subproblem involves only one of the functions $f_1$ and $f_2$ of the original problem. Therefore, the variables $x$ and $y$ are treated separately during the iterations, which makes the subproblems in (3)–(4) evidently easier to solve than the original problem (1)–(2). Owing to its easy implementation and impressive efficiency, the ADMM has recently received extensive attention from many different areas.

The ADMM is a very effective method for solving convex optimization problems. Glowinski and Marrocco [19] were among the first to describe it. Gabay [20], Glowinski and Le Tallec [21], as well as Eckstein and Bertsekas [22] studied convergence results related to the ADMM. It is suitable for solving convex programming problems with separable variables [7,23,24], and it is widely used in image processing, statistical learning, and machine learning [4–9].

For convex optimization problems, some methods based on the gradient operator are sensitive to the choice of the iteration step; if the parameters are not properly selected, the algorithm may not converge. In contrast, the ADMM is robust with respect to the choice of parameters: under some mild conditions, the method guarantees convergence for any positive value of its single parameter. When the objective function is quadratic, its global convergence has been proved [13,24], and the method is linearly convergent; for example, the ADMM converges linearly to the optimal solution of problems (1) and (2). Although the convergence of the ADMM is well understood, the accurate estimation of its convergence rate is still in its early stages; see, for example, [13,14].

Since the classical ADMM algorithm is inefficient when the subproblems must be solved accurately, Deng and Yin proposed a generalized ADMM in [25], which adds the proximal terms $\frac{1}{2}\|x - x^{(k)}\|_{P}^{2}$ and $\frac{1}{2}\|y - y^{(k)}\|_{T}^{2}$ to the $x$- and $y$-subproblems, respectively. Its scheme for (1)–(2) is

$$\begin{cases} x^{(k+1)} = \arg\min_{x}\ \big\{\psi(x, y^{(k)}, \lambda^{(k)}) + \tfrac{1}{2}\|x - x^{(k)}\|_{P}^{2}\big\}, \\ y^{(k+1)} = \arg\min_{y}\ \big\{\psi(x^{(k+1)}, y, \lambda^{(k)}) + \tfrac{1}{2}\|y - y^{(k)}\|_{T}^{2}\big\}, \\ \lambda^{(k+1)} = \lambda^{(k)} - \alpha\beta\big(Ax^{(k+1)} + By^{(k+1)} - b\big), \end{cases} \tag{6}$$

where $\alpha \in \big(0, \frac{1+\sqrt{5}}{2}\big)$ and the matrices $P$ and $T$ are symmetric positive semidefinite. Deng and Yin established the global and linear convergence of the generalized ADMM and gave the mathematical proof for the case where $P$ and $T$ are symmetric positive semidefinite matrices.
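Because $f_1$ and $f_2$ in (2) are quadratic, each argmin subproblem in (5) reduces to a symmetric linear system. The sketch below is one hedged NumPy realization of the plain ADMM iteration (5), reusing the toy data $F, G, f, g, A, B, b$ from the previous sketch; the penalty value, iteration limit, and stopping tolerance are illustrative choices, not prescriptions from the paper.

```python
# Hedged sketch of the ADMM iteration (5) for the quadratic model (1)-(2),
# reusing F, G, f, g, A, B, b, n, m, p from the previous sketch.
# beta, the iteration limit, and the tolerance are illustrative assumptions.
beta = 1.0
x, y, lam = np.zeros(n), np.zeros(m), np.zeros(p)

for k in range(500):
    # x-subproblem: (F + beta A^T A) x = A^T lam - f - beta A^T (B y - b)
    x = np.linalg.solve(F + beta * A.T @ A,
                        A.T @ lam - f - beta * A.T @ (B @ y - b))
    # y-subproblem: (G + beta B^T B) y = B^T lam - g - beta B^T (A x - b)
    y = np.linalg.solve(G + beta * B.T @ B,
                        B.T @ lam - g - beta * B.T @ (A @ x - b))
    # multiplier update with the constraint residual r = A x + B y - b
    r = A @ x + B @ y - b
    lam = lam - beta * r
    if np.linalg.norm(r) < 1e-10:        # stop once the constraint is met
        break
```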
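Under the same toy data and iterate names, the proximal terms in (6) only shift those two linear systems by $P$ and $T$; the fragment below sketches a single generalized-ADMM sweep, with $P$, $T$, and $\alpha$ chosen purely for illustration.

```python
# Hedged sketch of one sweep of the generalized ADMM (6); P, T, and alpha
# below are illustrative choices (any symmetric positive semidefinite P, T
# and alpha in (0, (1 + sqrt(5))/2) fit the quoted convergence result).
P = 0.5 * np.eye(n)
T = 0.5 * np.eye(m)
alpha = 1.0

# x-subproblem of (6): (F + beta A^T A + P) x = A^T lam - f - beta A^T (B y - b) + P x^(k)
x_new = np.linalg.solve(F + beta * A.T @ A + P,
                        A.T @ lam - f - beta * A.T @ (B @ y - b) + P @ x)
# y-subproblem of (6): (G + beta B^T B + T) y = B^T lam - g - beta B^T (A x - b) + T y^(k)
y_new = np.linalg.solve(G + beta * B.T @ B + T,
                        B.T @ lam - g - beta * B.T @ (A @ x_new - b) + T @ y)
# relaxed multiplier update of (6)
lam_new = lam - alpha * beta * (A @ x_new + B @ y_new - b)
```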
What is more, in order to make the subproblems of the generalized ADMM easier to solve and more efficient to run, the key issue is to choose adequate matrices $P$ and $T$.

In order to iteratively solve the linearly constrained quadratic programming problem (1)–(2), Bai and Tao [18] proposed a class of preconditioned alternating variable minimization with multiplier (PAVMM) methods by means of a matrix preconditioning strategy and a parameter-accelerating technique, which is based on a weighted inner product and the corresponding weighted norm. The iteration scheme is as follows:

$$\begin{cases} x^{(k+1)} = \arg\min_{x}\ \big\{\widetilde{\psi}(x, y^{(k)}, \lambda^{(k)})\big\}, \\ y^{(k+1)} = \arg\min_{y}\ \big\{\widetilde{\psi}(x^{(k+1)}, y, \lambda^{(k)})\big\}, \\ \lambda^{(k+1)} = \lambda^{(k)} - \alpha Q^{-1}W^{-1}\big(Ax^{(k+1)} + By^{(k+1)} - b\big), \end{cases} \tag{7}$$

where $\widetilde{\psi}(x, y, \lambda)$ is the following weighted augmented Lagrangian function:

$$\widetilde{\psi}(x, y, \lambda) = f_1(x) + f_2(y) - \langle Ax + By - b, \lambda \rangle_{W^{-1}} + \frac{\beta}{2}\|Ax + By - b\|_{W^{-1}}^{2}, \tag{8}$$

the matrices $W$ and $Q$ are symmetric positive semidefinite and nonsingular, and $\alpha$ is the relaxation parameter. Actually, the PAVMM method is a class of preconditioned alternating direction method of multipliers (PADMM); therefore, in this manuscript, the PAVMM method is referred to as the PADMM. In particular, the ADMM is a special case of the PADMM: if $W = Q = I$ and $\alpha = \beta$, the PADMM automatically reduces to the ADMM.

Later, Bai and Tao [26] also established a preconditioned and relaxed alternating variable minimization with multiplier (PRAVMM) method based on the PAVMM method, whose scheme is

$$\begin{cases} \hat{x}^{(k+1)} = \arg\min_{x}\ \big\{\widetilde{\psi}(x, y^{(k)}, \lambda^{(k)})\big\}, \\ x^{(k+1)} = \nu\hat{x}^{(k+1)} + (1-\nu)x^{(k)}, \\ \hat{y}^{(k+1)} = \arg\min_{y}\ \big\{\widetilde{\psi}(x^{(k+1)}, y, \lambda^{(k+\frac{1}{2})})\big\}, \\ y^{(k+1)} = \tau\hat{y}^{(k+1)} + (1-\tau)y^{(k)}, \\ \lambda^{(k+1)} = \lambda^{(k+\frac{1}{2})} - \alpha Q^{-1}W^{-1}\big(Ax^{(k+1)} + By^{(k+1)} - b\big), \end{cases} \tag{9}$$

where $\nu$, $\tau$, and $\alpha$ are positive constants and $\lambda^{(k+\frac{1}{2})}$ denotes the intermediate multiplier of the relaxed scheme. As above, we also rewrite the PRAVMM as the PRADMM. In order to achieve acceleration, the PRADMM obviously adds two relaxation parameters to the iterative process.
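For the weighted augmented Lagrangian (8), the subproblems again reduce to linear systems, now weighted by $W^{-1}$, and the multiplier step is preconditioned by $Q^{-1}W^{-1}$ as in (7). The sketch below is one hedged NumPy realization of the PADMM sweep on the same toy data; the particular $W$, $Q$, $\alpha$, and $\beta$ are illustrative assumptions and carry no convergence claim.

```python
# Hedged sketch of the PADMM iteration (7)-(8), reusing the toy data above.
# W and Q are simple symmetric positive definite preconditioners chosen only
# for illustration; alpha and beta are likewise illustrative.
W = np.eye(p) + 0.1 * np.ones((p, p))
Q = np.eye(p)
W_inv = np.linalg.inv(W)
alpha, beta = 1.0, 1.0
x, y, lam = np.zeros(n), np.zeros(m), np.zeros(p)

for k in range(500):
    # x-subproblem of (7): (F + beta A^T W^{-1} A) x = A^T W^{-1} (lam - beta (B y - b)) - f
    x = np.linalg.solve(F + beta * A.T @ W_inv @ A,
                        A.T @ W_inv @ (lam - beta * (B @ y - b)) - f)
    # y-subproblem of (7): (G + beta B^T W^{-1} B) y = B^T W^{-1} (lam - beta (A x - b)) - g
    y = np.linalg.solve(G + beta * B.T @ W_inv @ B,
                        B.T @ W_inv @ (lam - beta * (A @ x - b)) - g)
    # preconditioned multiplier update: lam <- lam - alpha Q^{-1} W^{-1} r
    r = A @ x + B @ y - b
    lam = lam - alpha * np.linalg.solve(W @ Q, r)   # (W Q)^{-1} = Q^{-1} W^{-1}
    if np.linalg.norm(r) < 1e-10:
        break
```

Setting $W = Q = I$ and $\alpha = \beta$ in this sketch recovers the plain ADMM sweep shown earlier, in line with the reduction stated in the text.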
