
Online Dictionary Learning for Sparse Coding

Julien Mairal ([email protected]) and Francis Bach ([email protected])
INRIA,¹ 45 rue d'Ulm, 75005 Paris, France

Jean Ponce ([email protected])
Ecole Normale Supérieure,¹ 45 rue d'Ulm, 75005 Paris, France

Guillermo Sapiro ([email protected])
University of Minnesota, Department of Electrical and Computer Engineering, 200 Union Street SE, Minneapolis, USA

¹ WILLOW Project, Laboratoire d'Informatique de l'Ecole Normale Supérieure, ENS/INRIA/CNRS UMR 8548.

Appearing in Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009. Copyright 2009 by the author(s)/owner(s).

Abstract

Sparse coding—that is, modelling data vectors as sparse linear combinations of basis elements—is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on learning the basis set, also called dictionary, to adapt it to specific data, an approach that has recently proven to be very effective for signal reconstruction and classification in the audio and image processing domains. This paper proposes a new online optimization algorithm for dictionary learning, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples. A proof of convergence is presented, along with experiments with natural images demonstrating that it leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.

1. Introduction

The linear decomposition of a signal using a few atoms of a learned dictionary instead of a predefined one—based on wavelets (Mallat, 1999) for example—has recently led to state-of-the-art results for numerous low-level image processing tasks such as denoising (Elad & Aharon, 2006), as well as higher-level tasks such as classification (Raina et al., 2007; Mairal et al., 2009), showing that sparse learned models are well adapted to natural signals. Unlike decompositions based on principal component analysis and its variants, these models do not impose that the basis vectors be orthogonal, allowing more flexibility to adapt the representation to the data. While learning the dictionary has proven to be critical to achieve (or improve upon) state-of-the-art results, effectively solving the corresponding optimization problem is a significant computational challenge, particularly in the context of the large-scale datasets involved in image processing tasks, which may include millions of training samples. Addressing this challenge is the topic of this paper.

Concretely, consider a signal x in R^m. We say that it admits a sparse approximation over a dictionary D in R^{m×k}, with k columns referred to as atoms, when one can find a linear combination of a "few" atoms from D that is "close" to the signal x. Experiments have shown that modelling a signal with such a sparse decomposition (sparse coding) is very effective in many signal processing applications (Chen et al., 1999). For natural images, predefined dictionaries based on various types of wavelets (Mallat, 1999) have been used for this task. However, learning the dictionary instead of using off-the-shelf bases has been shown to dramatically improve signal reconstruction (Elad & Aharon, 2006). Although some of the learned dictionary elements may sometimes "look like" wavelets (or Gabor filters), they are tuned to the input images or signals, leading to much better results in practice.

Most recent algorithms for dictionary learning (Olshausen & Field, 1997; Aharon et al., 2006; Lee et al., 2007) are second-order iterative batch procedures, accessing the whole training set at each iteration in order to minimize a cost function under some constraints. Although they have been shown experimentally to be much faster than first-order gradient descent methods (Lee et al., 2007), they cannot effectively handle very large training sets (Bottou & Bousquet, 2008), or dynamic training data changing over time, such as video sequences. To address these issues, we propose an online approach that processes one element (or a small subset) of the training set at a time. This is particularly important in the context of image and video processing (Protter & Elad, 2009), where it is common to learn dictionaries adapted to small patches, with training data that may include several millions of these patches (roughly one per pixel and per frame). In this setting, online techniques based on stochastic approximations are an attractive alternative to batch methods (Bottou, 1998). For example, first-order stochastic gradient descent with projections onto the constraint set is sometimes used for dictionary learning (see Aharon and Elad (2008) for instance). We show in this paper that it is possible to go further and exploit the specific structure of sparse coding in the design of an optimization procedure dedicated to the problem of dictionary learning, with low memory consumption and lower computational cost than classical second-order batch algorithms, and without the need for explicit learning-rate tuning. As demonstrated by our experiments, the algorithm scales up gracefully to large datasets with millions of training samples, and it is usually faster than more standard methods.

1.1. Contributions

This paper makes three main contributions.

• We cast in Section 2 the dictionary learning problem as the optimization of a smooth nonconvex objective function over a convex set, minimizing the (desired) expected cost when the training set size goes to infinity.

• We propose in Section 3 an iterative online algorithm that solves this problem by efficiently minimizing at each step a quadratic surrogate function of the empirical cost over the set of constraints. This method is shown in Section 4 to converge with probability one to a stationary point of the cost function.

• As shown experimentally in Section 5, our algorithm is significantly faster than previous approaches to dictionary learning on both small and large datasets of natural images. To demonstrate that it is adapted to difficult, large-scale image-processing tasks, we learn a dictionary on a 12-Megapixel photograph and use it for inpainting.

2. Problem Statement

Classical dictionary learning techniques (Olshausen & Field, 1997; Aharon et al., 2006; Lee et al., 2007) consider a finite training set of signals X = [x_1, ..., x_n] in R^{m×n} and optimize the empirical cost function

    f_n(D) \triangleq \frac{1}{n} \sum_{i=1}^{n} l(x_i, D),      (1)

where D in R^{m×k} is the dictionary, each column representing a basis vector, and l is a loss function such that l(x, D) should be small if D is "good" at representing the signal x. The number of samples n is usually large, whereas the signal dimension m is relatively small, for example, m = 100 for 10×10 image patches, and n ≥ 100,000 for typical image processing applications. In general, we also have k ≪ n (e.g., k = 200 for n = 100,000), and each signal only uses a few elements of D in its representation. Note that, in this setting, overcomplete dictionaries with k > m are allowed. As others (see Lee et al. (2007) for example), we define l(x, D) as the optimal value of the ℓ1 sparse coding problem:

    l(x, D) \triangleq \min_{\alpha \in \mathbb{R}^k} \frac{1}{2} \| x - D\alpha \|_2^2 + \lambda \| \alpha \|_1,      (2)

where λ is a regularization parameter.² This problem is also known as basis pursuit (Chen et al., 1999), or the Lasso (Tibshirani, 1996). It is well known that the ℓ1 penalty yields a sparse solution for α, but there is no analytic link between the value of λ and the corresponding effective sparsity ||α||_0.

² The ℓp norm of a vector x in R^m is defined, for p ≥ 1, by ||x||_p ≜ (Σ_{i=1}^m |x[i]|^p)^{1/p}. Following tradition, we denote by ||x||_0 the number of nonzero elements of the vector x. This "ℓ0" sparsity measure is not a true norm.
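To make the sparse coding step concrete, the sketch below solves problem (2) with iterative soft-thresholding (ISTA), one of many possible Lasso solvers; it is an illustrative NumPy implementation under our own naming and parameter choices, not the solver used in this paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Entry-wise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_ista(x, D, lam, n_iter=500):
    """Approximately solve min_alpha 0.5 * ||x - D @ alpha||_2^2 + lam * ||alpha||_1
    by iterative soft-thresholding (ISTA). Illustrative sketch only."""
    alpha = np.zeros(D.shape[1])
    # Step size 1/L, where L bounds the Lipschitz constant of the smooth term.
    L = np.linalg.norm(D, ord=2) ** 2 + 1e-12
    for _ in range(n_iter):
        grad = D.T @ (D @ alpha - x)              # gradient of the quadratic fit term
        alpha = soft_threshold(alpha - grad / L, lam / L)
    return alpha
```

With the orders of magnitude quoted above (m = 100, k = 200), D is a 100×200 matrix and the returned code alpha is a 200-dimensional vector with only a few nonzero entries when λ is large enough.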
To prevent D from being arbitrarily large (which would lead to arbitrarily small values of α), it is common to constrain its columns (d_j)_{j=1}^k to have an ℓ2 norm less than or equal to one. We will call C the convex set of matrices verifying this constraint:

    C \triangleq \{ D \in \mathbb{R}^{m \times k} \ \mathrm{s.t.} \ \forall j = 1, \dots, k, \ d_j^T d_j \le 1 \}.      (3)
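Because the constraint in (3) is separable across columns, projecting a dictionary onto C amounts to rescaling every column whose ℓ2 norm exceeds one. The following is a minimal sketch of that projection, again assuming NumPy; the helper name is ours, not the paper's.

```python
import numpy as np

def project_columns(D):
    """Euclidean projection onto C from Eq. (3): rescale any column d_j with
    ||d_j||_2 > 1 back onto the unit ball, and leave the other columns untouched."""
    norms = np.linalg.norm(D, axis=0)
    return D / np.maximum(norms, 1.0)
```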
Note that the problem of minimizing the empirical cost f_n(D) is not convex with respect to D. It can be rewritten as a joint optimization problem with respect to the dictionary D and the coefficients α = [α_1, ..., α_n] of the sparse decomposition, which is not jointly convex, but convex with respect to each of the two variables D and α when the other one is fixed:

    \min_{D \in C, \, \alpha \in \mathbb{R}^{k \times n}} \ \frac{1}{n} \sum_{i=1}^{n} \Big( \frac{1}{2} \| x_i - D\alpha_i \|_2^2 + \lambda \| \alpha_i \|_1 \Big).      (4)

A natural approach to solving this problem is to alternate between the two variables, minimizing over one while keeping the other one fixed, as proposed by Lee et al. (2007) (see also Aharon et al. (2006), who use ℓ0 rather than ℓ1 penalties, for related approaches).³ Since the computation of α dominates the cost of each iteration, a second-order optimization technique can be used in this case to accurately estimate D at each step when α is fixed. As pointed out by Bottou and Bousquet (2008), however, one is usually not interested in a perfect minimization of the empirical cost f_n(D), but rather in minimizing the expected cost over the underlying distribution of the signals, which f_n only approximates.

³ In our setting, as in Lee et al. (2007), we use the convex ℓ1 norm, which has empirically proven to be better behaved in general than the ℓ0 pseudo-norm for dictionary learning.
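For illustration only, the alternating scheme described above can be sketched as follows, reusing the sparse_code_ista and project_columns helpers from the earlier sketches. This is the classical batch baseline, not the online algorithm proposed in this paper, and it replaces the exact second-order dictionary updates of the cited methods with a simple projected gradient step; all names and parameter values are assumptions made for the example.

```python
import numpy as np

def batch_dictionary_learning(X, k, lam, n_outer=20, n_dict_steps=50, seed=0):
    """Schematic batch alternation for problem (4): sparse-code all n signals with
    D fixed, then update D with the codes fixed, keeping its columns inside C."""
    m, n = X.shape
    rng = np.random.default_rng(seed)
    D = project_columns(rng.standard_normal((m, k)))   # random initial atoms
    for _ in range(n_outer):
        # Step 1: sparse coding of every training signal (columns of X), D fixed.
        A = np.column_stack([sparse_code_ista(X[:, i], D, lam) for i in range(n)])
        # Step 2: dictionary update with A fixed; projected gradient on the quadratic
        # fit term stands in for the second-order updates used in the literature.
        step = 1.0 / (np.linalg.norm(A, ord=2) ** 2 / n + 1e-12)
        for _ in range(n_dict_steps):
            grad = (D @ A - X) @ A.T / n               # gradient of the data-fit term in D
            D = project_columns(D - step * grad)
    return D, A
```

Every outer iteration of this sketch touches the full training set, which is exactly the cost that the online approach of this paper is designed to avoid when n reaches the millions.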