Sparse Approximation via Penalty Decomposition Methods∗

Zhaosong Lu†    Yong Zhang‡

February 19, 2012

Abstract

In this paper we consider sparse approximation problems, that is, general $\ell_0$ minimization problems in which the $\ell_0$-"norm" of a vector appears in the constraints or in the objective function. In particular, we first study the first-order optimality conditions for these problems. We then propose penalty decomposition (PD) methods for solving them, in which a sequence of penalty subproblems is solved by a block coordinate descent (BCD) method. Under suitable assumptions, we establish that any accumulation point of the sequence generated by the PD methods satisfies the first-order optimality conditions of the problems. Furthermore, for the problems in which the $\ell_0$ part is the only nonconvex part, we show that such an accumulation point is a local minimizer of the problems. In addition, we show that any accumulation point of the sequence generated by the BCD method is a saddle point of the penalty subproblem. Moreover, for the problems in which the $\ell_0$ part is the only nonconvex part, we establish that such an accumulation point is a local minimizer of the penalty subproblem. Finally, we test the performance of our PD methods by applying them to sparse logistic regression, sparse inverse covariance selection, and compressed sensing problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed.

Key words: $\ell_0$ minimization, penalty decomposition methods, block coordinate descent method, compressed sensing, sparse logistic regression, sparse inverse covariance selection

∗This work was supported in part by an NSERC Discovery Grant.
†Department of Mathematics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada (email: [email protected]).
‡Department of Mathematics, Simon Fraser University, Burnaby, BC, V5A 1S6, Canada (email: [email protected]).

1 Introduction

Nowadays, there are numerous applications in which sparse solutions are sought. For example, in compressed sensing, a large sparse signal is decoded from a relatively small number of linear measurements, which can be formulated as finding a sparse solution to a system of linear equalities and/or inequalities. Similar ideas have also been widely used in linear regression. Recently, sparse inverse covariance selection has become an important tool for discovering conditional independence in graphical models. One popular approach to sparse inverse covariance selection is to find an approximate sparse inverse covariance matrix while maximizing the log-likelihood (see, for example, [16]). Similarly, sparse logistic regression has been proposed as a promising method for feature selection in classification problems, in which a sparse solution is sought to minimize the average logistic loss (see, for example, [41]).

Mathematically, all these applications can be formulated as the following $\ell_0$ minimization problems:

$$\min_{x \in X}\,\{f(x) : g(x) \le 0,\ h(x) = 0,\ \|x_J\|_0 \le r\}, \tag{1}$$

$$\min_{x \in X}\,\{f(x) + \nu \|x_J\|_0 : g(x) \le 0,\ h(x) = 0\} \tag{2}$$

for some integer $r \ge 0$ and parameter $\nu \ge 0$ controlling the sparsity of the solution, where $X$ is a closed convex set in the $n$-dimensional Euclidean space $\Re^n$, $f: \Re^n \to \Re$, $g: \Re^n \to \Re^m$ and $h: \Re^n \to \Re^p$ are continuously differentiable functions, and $\|x_J\|_0$ denotes the cardinality of the subvector formed by the entries of $x$ indexed by $J$.
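To make the formulations concrete, consider the following compressed sensing instance of (1); this specialization is our illustration rather than an excerpt from the paper. Taking $X = \Re^n$, $f(x) = \frac{1}{2}\|Ax - b\|^2$ for a measurement matrix $A \in \Re^{m \times n}$ and observation vector $b \in \Re^m$, no constraint functions $g$ and $h$, and $J = \{1, \dots, n\}$, problem (1) becomes the cardinality-constrained least-squares problem

$$\min_{x \in \Re^n}\,\Big\{\tfrac{1}{2}\|Ax - b\|^2 : \|x\|_0 \le r\Big\},$$

while the corresponding instance of (2) is the $\ell_0$-regularized problem $\min_{x \in \Re^n} \frac{1}{2}\|Ax - b\|^2 + \nu\|x\|_0$.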
Some algorithms have been proposed for solving special cases of these problems. For example, the iterative hard thresholding algorithms [26, 5, 6] and matching pursuit algorithms [38, 52] were developed for solving the $\ell_0$-regularized least squares problems arising in compressed sensing, but they cannot be applied to the general $\ell_0$ minimization problems (1) and (2). In the literature, one popular approach for dealing with (1) and (2) is to replace $\|\cdot\|_0$ by the $\ell_1$-norm $\|\cdot\|_1$ and solve the resulting relaxation problems instead (see, for example, [14, 41, 10, 51]). For some applications such as compressed sensing, it has been shown in [8] that under suitable assumptions this approach is capable of solving (1) and (2). Recently, another relaxation approach has been proposed for problems (1) and (2), in which $\|\cdot\|_0$ is replaced by the $\ell_p$-"norm" $\|\cdot\|_p$ for some $p \in (0, 1)$ (see, for example, [9, 11, 12]). In general, however, the solution quality of these relaxation approaches is unclear. Indeed, for the example given in the Appendix, the $\ell_p$ relaxation approach fails to recover the sparse solution for every $p \in (0, 1]$.

In this paper we propose penalty decomposition (PD) methods for solving problems (1) and (2), in which a sequence of penalty subproblems is solved by a block coordinate descent (BCD) method. Under suitable assumptions, we establish that any accumulation point of the sequence generated by the PD method satisfies the first-order optimality conditions of (1) and (2). Furthermore, when the $h_i$'s are affine and $f$ and the $g_i$'s are convex, we show that such an accumulation point is a local minimizer of the problems. In addition, we show that any accumulation point of the sequence generated by the BCD method is a saddle point of the penalty subproblem. Moreover, when the $h_i$'s are affine and $f$ and the $g_i$'s are convex, we establish that such an accumulation point is a local minimizer of the penalty subproblem. Finally, we test the performance of our PD methods by applying them to sparse logistic regression, sparse inverse covariance selection, and compressed sensing problems. The computational results demonstrate that our methods generally outperform the existing methods in terms of solution quality and/or speed. A sketch of the PD scheme for the compressed sensing instance above is given below.
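To illustrate how the PD framework operates, the following is a minimal sketch (our own illustrative code, not taken from the paper) for the cardinality-constrained least-squares instance of (1) above. An auxiliary block $y$ carries the sparsity constraint, the coupling $x = y$ is penalized quadratically, and each penalty subproblem is approximately minimized by BCD: the $x$-block is an unconstrained least-squares solve, and the $y$-block is a projection onto the set of $r$-sparse vectors (keep the $r$ largest-magnitude entries). The stopping rules and penalty update schedule here are simplified assumptions.

```python
import numpy as np

def pd_sparse_least_squares(A, b, r, rho0=1.0, sigma=10.0,
                            max_outer=30, max_bcd=100, tol=1e-6):
    """Penalty decomposition sketch for
        min 0.5*||A x - b||^2  subject to  ||x||_0 <= r.

    The constraint is carried by an auxiliary block y with ||y||_0 <= r,
    the coupling x = y is penalized quadratically, and each penalty
    subproblem is approximately minimized by block coordinate descent.
    """
    n = A.shape[1]
    x, y = np.zeros(n), np.zeros(n)
    rho = rho0
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(max_outer):
        for _ in range(max_bcd):
            # x-block: minimize 0.5*||Ax-b||^2 + (rho/2)*||x-y||^2,
            # i.e., solve (A'A + rho*I) x = A'b + rho*y.
            x_new = np.linalg.solve(AtA + rho * np.eye(n), Atb + rho * y)
            # y-block: project x onto {y : ||y||_0 <= r} by keeping
            # the r largest-magnitude entries (hard thresholding).
            y_new = np.zeros(n)
            keep = np.argsort(np.abs(x_new))[::-1][:r]
            y_new[keep] = x_new[keep]
            done = max(np.linalg.norm(x_new - x),
                       np.linalg.norm(y_new - y)) < tol
            x, y = x_new, y_new
            if done:
                break
        if np.linalg.norm(x - y) < tol:  # blocks agree: near-feasible point
            break
        rho *= sigma  # tighten the penalty and re-solve
    return y  # r-sparse by construction
```

Note that the $y$-step has a closed-form solution (hard thresholding), so each BCD sweep is cheap, while the outer loop drives $\rho$ upward until $x \approx y$; the paper's full scheme states the penalty subproblems and update rules for the general problems (1) and (2).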
The rest of this paper is organized as follows. In Subsection 1.1, we introduce the notation that is used throughout the paper. In Section 2, we establish the first-order optimality conditions for general $\ell_0$ minimization problems. In Section 3, we study a class of special $\ell_0$ minimization problems. We develop the PD methods for general $\ell_0$ minimization problems in Section 4 and establish some convergence results for them. In Section 5, we conduct numerical experiments to test the performance of our PD methods for solving sparse logistic regression, sparse inverse covariance selection, and compressed sensing problems. Finally, we present some concluding remarks in Section 6.

1.1 Notation

In this paper, the symbols $\Re^n$ and $\Re^n_+$ denote the $n$-dimensional Euclidean space and the nonnegative orthant of $\Re^n$, respectively. Given a vector $v \in \Re^n$, the nonnegative part of $v$ is denoted by $v^+ = \max(v, 0)$, where the maximization operates entry-wise. For any real vector, $\|\cdot\|_0$ and $\|\cdot\|$ denote the cardinality (i.e., the number of nonzero entries) and the Euclidean norm of the vector, respectively. Given an index set $L \subseteq \{1, \dots, n\}$, $|L|$ denotes the size of $L$, and the elements of $L$ are denoted by $L(1), \dots, L(|L|)$, which are always arranged in ascending order. $x_L$ denotes the subvector formed by the entries of $x$ indexed by $L$. Likewise, $X_L$ denotes the submatrix formed by the columns of $X$ indexed by $L$. In addition, for any two sets $A$ and $B$, the set difference of $A$ and $B$ is given by $A \setminus B = \{x \in A : x \notin B\}$. Given a closed set $C \subseteq \Re^n$, let $N_C(x)$ and $T_C(x)$ denote the normal and tangent cones of $C$ at any $x \in C$, respectively. The space of all $m \times n$ matrices with real entries is denoted by $\Re^{m \times n}$, and the space of symmetric $n \times n$ matrices is denoted by $S^n$. We denote by $I$ the identity matrix, whose dimension should be clear from the context. If $X \in S^n$ is positive semidefinite (resp., positive definite), we write $X \succeq 0$ (resp., $X \succ 0$). The cone of positive semidefinite (resp., definite) matrices is denoted by $S^n_+$ (resp., $S^n_{++}$). $D$ is the operator that maps a vector to a diagonal matrix whose diagonal consists of the vector. Given an $n \times n$ matrix $X$, $D_e(X)$ denotes the diagonal matrix whose $i$th diagonal element is $X_{ii}$ for $i = 1, \dots, n$.

2 First-order optimality conditions

In this section we study the first-order optimality conditions for problems (1) and (2). In particular, we first discuss the first-order necessary conditions for them. We then study the first-order sufficient conditions for them when the $\ell_0$ part is the only nonconvex part.

We now establish the first-order necessary optimality conditions for problems (1) and (2).

Theorem 2.1 Assume that $x^*$ is a local minimizer of problem (1). Let $J^* \subseteq J$ be an index set with $|J^*| = r$ such that $x^*_j = 0$ for all $j \in \bar{J}^*$, where $\bar{J}^* = J \setminus J^*$. Suppose that the following Robinson condition

$$\left\{\begin{bmatrix} g'(x^*)d - v \\ h'(x^*)d \\ (I_{\bar{J}^*})^T d \end{bmatrix} : d \in T_X(x^*),\ v \in \Re^m,\ v_i \le 0,\ i \in A(x^*)\right\} = \Re^m \times \Re^p \times \Re^{|J|-r} \tag{3}$$

holds, where $g'(x^*)$ and $h'(x^*)$ denote the Jacobians of the functions $g = (g_1, \dots, g_m)$ and $h = (h_1, \dots, h_p)$ at $x^*$, respectively, and

$$A(x^*) = \{1 \le i \le m : g_i(x^*) = 0\}. \tag{4}$$

Then there exists $(\lambda^*, \mu^*, z^*) \in \Re^m \times \Re^p \times \Re^n$ together with $x^*$ satisfying

$$-\nabla f(x^*) - \nabla g(x^*)\lambda^* - \nabla h(x^*)\mu^* - z^* \in N_X(x^*), \tag{5}$$

$$\lambda^*_i \ge 0, \quad \lambda^*_i g_i(x^*) = 0, \quad i = 1, \dots, m; \qquad z^*_j = 0, \quad j \in \bar{J} \cup J^*,$$

where $\bar{J}$ is the complement of $J$ in $\{1, \dots, n\}$.

Proof. By the assumption that $x^*$ is a local minimizer of problem (1), one can observe that $x^*$ is also a local minimizer of the following problem:

$$\min_{x \in X}\,\{f(x) : g(x) \le 0,\ h(x) = 0,\ x_{\bar{J}^*} = 0\}. \tag{6}$$

Using this observation, (3), and Theorem 3.25 of [47], we see that the conclusion holds.
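To see these conditions in action, consider the following worked example (our illustration, not from the paper): let $X = \Re^n$, $J = \{1, \dots, n\}$, drop $g$ and $h$, and take $f(x) = \frac{1}{2}\|x - a\|^2$ for a vector $a$ with all entries nonzero. A candidate $x^*$ keeps the entries of $a$ on some index set $J^*$ with $|J^*| = r$ and is zero on $\bar{J}^* = J \setminus J^*$. Since $T_X(x^*) = \Re^n$, the Robinson condition (3) reduces to the surjectivity of $d \mapsto (I_{\bar{J}^*})^T d$, which holds trivially. With $N_X(x^*) = \{0\}$, $\bar{J} = \emptyset$, and no multipliers $\lambda^*, \mu^*$, condition (5) becomes

$$z^* = -\nabla f(x^*) = a - x^*, \qquad z^*_j = 0 \ \text{ for } j \in J^*,$$

which holds because $x^*_j = a_j$ on $J^*$, while on $\bar{J}^*$ the entries $z^*_j = a_j$ are unrestricted. Note that every such hard-thresholded point satisfies the necessary conditions; the global minimizer corresponds to choosing $J^*$ as the indices of the $r$ largest entries of $|a|$.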
