Computational Methods for Sparse Solution of Linear Inverse Problems
Joel A. Tropp, Member, IEEE, and Stephen J. Wright
Proceedings of the IEEE, Special Issue on Applications of Sparse Representation and Compressive Sensing

JAT is with Applied & Computational Mathematics, Firestone Laboratories MC 217-50, California Institute of Technology, 1200 E. California Blvd., Pasadena, CA 91125-5000 USA. E-mail: [email protected]. SJW is with the Computer Sciences Department, University of Wisconsin, 1210 W. Dayton St., Madison, WI 53706 USA. E-mail: [email protected]. JAT was supported by ONR N00014-08-1-2065. SJW was supported by NSF CCF-0430504, DMS-0427689, CTS-0456694, CNS-0540147, and DMS-0914524. Manuscript received March 15, 2009. Revised February 17, 2010.
Abstract—The goal of the sparse approximation problem is to approximate a target signal using a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a plethora of applications.

Index Terms—Sparse Approximation, Compressed Sensing, Matching Pursuit, Convex Optimization

I. INTRODUCTION

Linear inverse problems arise throughout engineering and the mathematical sciences. In most applications, these problems are ill-conditioned or underdetermined, so one must apply additional regularizing constraints in order to obtain interesting or useful solutions. Over the last two decades, sparsity constraints have emerged as a fundamental type of regularizer. This approach seeks an approximate solution to a linear system while requiring that the unknown has few nonzero entries relative to its dimension:

    Find sparse x such that Φx ≈ u,

where u is a target signal and Φ is a known matrix. Generically, this formulation is referred to as sparse approximation [1]. These problems arise in many areas, including statistics, signal processing, machine learning, coding theory, and approximation theory. Compressive sampling refers to a specific type of sparse approximation problem first studied in [2], [3].

Tykhonov regularization, the classical device for solving linear inverse problems, controls the energy (i.e., the Euclidean norm) of the unknown vector. This approach leads to a linear least-squares problem whose solution is generally nonsparse. To obtain sparse solutions, we must develop more sophisticated algorithms and—often—commit more computational resources. The effort pays off. Recent research has demonstrated that, in many cases of interest, there are algorithms that can find good solutions to large sparse approximation problems in a reasonable time.

In this paper, we give an overview of algorithms for sparse approximation, describing their computational requirements and the relationships between them. We also discuss the types of problems for which each method is most effective in practice. Finally, we sketch the theoretical results that justify the application of these algorithms. Although low-rank regularization also falls within the sparse approximation framework, the algorithms we describe do not apply directly to this class of problems.

Subsection I-A describes "ideal" formulations of sparse approximation problems and some common features of algorithms that attempt to solve these problems. Section II provides additional detail about greedy pursuit methods. Section III presents formulations based on convex programming and algorithms for solving these optimization problems.

A. Formulations

Suppose that Φ ∈ R^(m×N) is a real matrix whose columns have unit Euclidean norm: ‖φ_j‖_2 = 1 for j = 1, 2, ..., N. (The normalization does not compromise generality.) This matrix is often referred to as a dictionary. The columns of the matrix are "entries" in the dictionary, and a column submatrix is called a subdictionary.

The counting function ‖·‖_0 : R^N → R returns the number of nonzero components in its argument. We say that a vector x is s-sparse when ‖x‖_0 ≤ s. When u = Φx, we refer to x as a representation of the signal u with respect to the dictionary.
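To make the notation concrete, the following sketch builds a small dictionary with unit-norm columns, draws an s-sparse coefficient vector, and forms the corresponding signal u = Φx. It is a minimal illustration assuming NumPy; the Gaussian dictionary and the specific dimensions are arbitrary choices for demonstration, not part of the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N, s = 20, 50, 3    # dimensions and sparsity level (illustrative only)

# Dictionary: a random matrix whose columns are normalized to unit Euclidean norm.
Phi = rng.standard_normal((m, N))
Phi /= np.linalg.norm(Phi, axis=0)

# An s-sparse coefficient vector: s nonzero entries on a randomly chosen support.
x = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
x[support] = rng.standard_normal(s)

# The observed signal and the counting function ||x||_0.
u = Phi @ x
print(np.count_nonzero(x))                             # ||x||_0 = s
print(np.allclose(np.linalg.norm(Phi, axis=0), 1.0))   # columns have unit norm
```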
In practice, signals tend to be compressible, rather than sparse. Mathematically, a compressible signal has a representation whose entries decay rapidly when sorted in order of decreasing magnitude. Compressible signals are well approximated by sparse signals, so the sparse approximation framework applies to this class. In practice, it is usually more challenging to identify approximate representations of compressible signals than of sparse signals.

The most basic problem we consider is to produce a maximally sparse representation of an observed signal u:

    min_x ‖x‖_0  subject to  Φx = u.                                  (1)

One natural variation is to relax the equality constraint to allow some error tolerance ε ≥ 0, in case the observed signal is contaminated with noise:

    min_x ‖x‖_0  subject to  ‖Φx − u‖_2 ≤ ε.                          (2)

It is most common to measure the prediction–observation discrepancy with the Euclidean norm, but other loss functions may also be appropriate.

The elements of (2) can be combined in several ways to obtain related problems. For example, we can seek the minimal error possible at a given level of sparsity s ≥ 1:

    min_x ‖Φx − u‖_2  subject to  ‖x‖_0 ≤ s.                          (3)

We can also use a parameter λ > 0 to balance the twin objectives of minimizing both error and sparsity:

    min_x (1/2)‖Φx − u‖_2^2 + λ‖x‖_0.                                 (4)

If there are no restrictions on the dictionary Φ and the signal u, then sparse approximation is at least as hard as a general constraint satisfaction problem. Indeed, for fixed constants C, K ≥ 1, it is NP-hard to produce a (Cs)-sparse approximation whose error lies within a factor K of the minimal s-term approximation error [4, Sec. 0.8.2].
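The hardness statement concerns worst-case instances; for tiny dimensions the combinatorial search is easy to write down. The sketch below, assuming NumPy and using the hypothetical helper name sparsest_fit, solves formulation (3) by exhaustive search over all supports of size s. Its cost grows combinatorially in N, which is exactly why such exhaustive search is only plausible at very small scale.

```python
import numpy as np
from itertools import combinations

def sparsest_fit(Phi, u, s):
    """Exhaustively solve problem (3): minimize ||Phi x - u||_2 over s-sparse x.

    Only feasible for tiny N and s, since it examines all C(N, s) supports.
    """
    m, N = Phi.shape
    best_x, best_err = np.zeros(N), np.linalg.norm(u)   # x = 0 is the trivial baseline
    for support in combinations(range(N), s):
        cols = Phi[:, list(support)]                    # subdictionary for this support
        coef, *_ = np.linalg.lstsq(cols, u, rcond=None) # best coefficients on the support
        err = np.linalg.norm(cols @ coef - u)
        if err < best_err:
            best_x = np.zeros(N)
            best_x[list(support)] = coef
            best_err = err
    return best_x, best_err
```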
Nevertheless, over the past decade, researchers have identified many interesting classes of sparse approximation problems that submit to computationally tractable algorithms. These striking results help to explain why sparse approximation has been such an important and popular topic of research in recent years.

In practice, sparse approximation algorithms tend to be slow unless the dictionary Φ admits a fast matrix–vector multiply. Let us mention two classes of sparse approximation problems where this property holds. First, many naturally occurring signals are compressible with respect to dictionaries constructed using principles of harmonic analysis [5] (e.g., wavelet coefficients of natural images). This type of structured dictionary often comes with a fast transformation algorithm. Second, in compressive sampling, we typically view Φ as the product of a random observation matrix and a fixed orthogonal matrix that determines a basis in which the signal is sparse. Again, fast multiplication is possible when both the observation matrix and sparsity basis are structured.

Recently, there have been substantial efforts to incorporate more sophisticated signal constraints into sparsity models. In particular, Baraniuk et al. have studied model-based compressive sampling algorithms.

B. Major Algorithmic Approaches

There are at least five major classes of computational techniques for attacking the sparse approximation problems described above:

1) Greedy pursuit. Iteratively refine a sparse solution by successively identifying one or more components that yield the greatest improvement in quality.
2) Convex relaxation. Replace the combinatorial problem with a convex optimization problem, and solve the convex program with algorithms that exploit the problem structure.
3) Bayesian methods. Place a sparsity-promoting prior on the unknown coefficients and develop a maximum a posteriori estimator that incorporates the observation. Identify a region of significant posterior mass [8] or average over most-probable models [9].
4) Nonconvex optimization. Relax the ℓ_0 problem to a related nonconvex problem and attempt to identify a stationary point [10].
5) Brute force. Search through all possible support sets, possibly using cutting-plane methods to reduce the number of possibilities [11, Sec. 3.7–3.8].

This article focuses on greedy pursuits and convex optimization. These two approaches are computationally practical and lead to provably correct solutions under well-defined conditions. Bayesian methods and nonconvex optimization are based on sound principles, but they do not currently offer theoretical guarantees. Brute force is, of course, algorithmically correct, but it remains plausible only for small-scale problems.

Recently, we have also seen interest in heuristic algorithms based on belief-propagation and message-passing techniques developed in the graphical models and coding theory communities [12], [13].

C. Verifying Correctness

Researchers have identified several tools that can be used to prove that sparse approximation algorithms produce optimal solutions to sparse approximation problems. These tools also provide insight into the efficiency of computational algorithms, so the theoretical background merits a summary.

The uniqueness of sparse representations is equivalent to an algebraic condition on submatrices of Φ. Suppose a signal u has two different s-sparse representations x_1 and x_2. Clearly,

    u = Φx_1 = Φx_2  ⟹  Φ(x_1 − x_2) = 0.

In words, Φ maps a nontrivial (2s)-sparse signal to zero. It follows that each s-sparse representation is unique if and only if each (2s)-column submatrix of Φ is injective.

To ensure that sparse approximation is computationally tractable, we need stronger assumptions on Φ. Not only should sparse signals be uniquely determined, but they should be stably determined. Consider a signal perturbation ∆u and an s-sparse coefficient perturbation ∆x, related by ∆u = Φ(∆x).
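For a small dictionary, the injectivity condition above can be checked directly by examining every (2s)-column submatrix. The following sketch, assuming NumPy, reports whether the smallest singular value over all such submatrices is strictly positive, which certifies that every s-sparse representation with respect to Φ is unique. The function name and the numerical tolerance are illustrative choices, not part of the paper.

```python
import numpy as np
from itertools import combinations

def certifies_unique_sparse_reps(Phi, s, tol=1e-10):
    """Return True if every submatrix of Phi with min(2s, N) columns is injective.

    By the argument above, this guarantees that any s-sparse representation of a
    signal with respect to Phi is unique. The cost is combinatorial in N, so the
    check is practical only for small dictionaries.
    """
    m, N = Phi.shape
    k = min(2 * s, N)
    if k > m:
        return False    # a k-column submatrix cannot be injective when k > m
    smallest = np.inf
    for support in combinations(range(N), k):
        sigma_min = np.linalg.svd(Phi[:, list(support)], compute_uv=False)[-1]
        smallest = min(smallest, sigma_min)
    return smallest > tol
```

Because the number of submatrices grows combinatorially, this certificate is only computable for small instances; the stronger stability requirements introduced in the preceding paragraph are what large-scale guarantees rely on.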