Multi-Dimensional Sparse Structured Signal Approximation Using Split Bregman Iterations

Yoann Isaac, Quentin Barthélemy, Cédric Gouy-Pailler, Jamal Atif, Michèle Sebag

To cite this version: Yoann Isaac, Quentin Barthélemy, Cédric Gouy-Pailler, Jamal Atif, Michèle Sebag. Multi-dimensional sparse structured signal approximation using split Bregman iterations. ICASSP 2013 - 38th IEEE International Conference on Acoustics, Speech and Signal Processing, May 2013, Vancouver, Canada. pp. 3826-3830. hal-00862645

HAL Id: hal-00862645
https://hal.archives-ouvertes.fr/hal-00862645
Submitted on 30 Apr 2019

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

MULTI-DIMENSIONAL SPARSE STRUCTURED SIGNAL APPROXIMATION USING SPLIT BREGMAN ITERATIONS

Yoann Isaac (1,2), Quentin Barthélemy (1), Jamal Atif (2), Cédric Gouy-Pailler (1), Michèle Sebag (2)

(1) CEA, LIST, Data Analysis Tools Laboratory, 91191 Gif-sur-Yvette CEDEX, France
(2) TAO, CNRS − INRIA − LRI, Université Paris-Sud, 91405 Orsay, France

ABSTRACT

The paper focuses on the sparse approximation of signals using overcomplete representations, such that it preserves the (prior) structure of multi-dimensional signals. The underlying optimization problem is tackled using a multi-dimensional extension of the split Bregman optimization approach. An extensive empirical evaluation shows how the proposed approach compares to the state of the art depending on the signal features.

Index Terms — Sparse approximation, Regularization, Fused-LASSO, Split Bregman, Multidimensional signals

1. INTRODUCTION

Dictionary-based representations proceed by approximating a signal via a linear combination of dictionary elements, referred to as atoms. Sparse dictionary-based representations, where each signal involves but a few atoms, have been thoroughly investigated for 1D and 2D signals for their good properties, as they enable robust transmission (compressed sensing [1]) or image in-painting [2]. The dictionary is either given, based on the domain knowledge, or learned from the signals [3].

The so-called sparse approximation algorithm aims at finding a sparse approximate representation of the considered signals using this dictionary, by minimizing a weighted sum of the approximation loss and the representation sparsity (see [4] for a survey). When available, prior knowledge about the application domain can also be used to guide the search toward "plausible" decompositions.

This paper focuses on sparse approximation enforcing a structured decomposition property, defined as follows. Let the signals be structured (e.g. being recorded in consecutive time steps); the structured decomposition property then requires that the signal structure is preserved in the dictionary-based representation (e.g. the atoms involved in the approximation of consecutive signals have "close" weights). The structured decomposition property is enforced through adding a total variation (TV) penalty to the minimization objective.

In the 1D case, the minimization of the above overall objective can be tackled using the fused-LASSO approach first introduced in [5]. In the case of multi-dimensional signals(*), however, the minimization problem presents additional difficulties. The first contribution of the paper is to show how this problem can be handled efficiently, by extending the (1D) split Bregman fused-LASSO solver presented in [6] to the multi-dimensional case. The second contribution is a comprehensive experimental study, comparing state-of-the-art algorithms to the presented approach, referred to as Multi-SSSA, and establishing their relative performance depending on diverse features of the structured signals.

Section 2 introduces the formal background. The proposed optimization approach is described in section 3.1. Section 4 presents our experimental setting and reports on the results. The presented approach is discussed w.r.t. related work in section 5, and the paper concludes with some perspectives for further research.

(*) Our motivating application considers electro-encephalogram (EEG) signals, where the number of sensors ranges up to a few hundreds.
2. PROBLEM STATEMENT

Let Y = [y_1, ..., y_T] ∈ R^{C×T} be a matrix made of T C-dimensional signals, and Φ ∈ R^{C×N} an overcomplete dictionary of N atoms (N > C). We consider the linear model

    y_t = Φ x_t + e_t,   t ∈ {1, ..., T},   (1)

in which X = [x_1, ..., x_T] ∈ R^{N×T} stands for the decomposition matrix and E = [e_1, ..., e_T] ∈ R^{C×T} is a zero-mean Gaussian noise matrix.

The sparse structured decomposition problem consists of approximating the y_i, i ∈ {1, ..., T}, by decomposing them on the dictionary Φ, such that the structure of the decompositions x_i reflects that of the signals y_i. This goal is formalized as the minimization(**) of the objective function

    min_X ||Y − ΦX||_2^2 + λ_1 ||X||_1 + λ_2 ||XP||_1,   (2)

where λ_1 and λ_2 are regularization coefficients and P encodes the signal structure (provided by the prior knowledge) as in [7]. In the remainder of the paper, the considered structure is that of the temporal ordering of the signals, i.e. ||XP||_1 = Σ_{t=2}^{T} ||X_t − X_{t−1}||_1.

(**) ||A||_p = (Σ_i Σ_j |A_{i,j}|^p)^{1/p}. The case p = 2 corresponds to the classical Frobenius norm.
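To make Eq. (2) concrete, here is a minimal NumPy sketch that builds one standard choice of P realizing the temporal-ordering structure above and evaluates the objective; the function names and the explicit form of P are illustrative assumptions, not taken from the paper.

import numpy as np

def temporal_difference_matrix(T):
    # P in R^{T x (T-1)}: column t of X @ P equals x_{t+1} - x_t, so that
    # ||X P||_1 = sum_{t=2}^{T} ||x_t - x_{t-1}||_1 as in Section 2.
    P = np.zeros((T, T - 1))
    for t in range(T - 1):
        P[t, t] = -1.0
        P[t + 1, t] = 1.0
    return P

def objective(Y, Phi, X, P, lam1, lam2):
    # Value of Eq. (2): squared Frobenius data-fit term plus the two penalties.
    residual = Y - Phi @ X
    return (np.sum(residual ** 2)
            + lam1 * np.abs(X).sum()
            + lam2 * np.abs(X @ P).sum())

Any other structure prior could be encoded the same way by substituting a different P.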
3. OPTIMIZATION STRATEGY

3.1. Algorithm description

Bregman iterations have been shown to be very efficient for ℓ1-regularized problems [8]. For convex problems with linear constraints, the split Bregman iteration technique is equivalent to the method of multipliers and the augmented Lagrangian one [9]. The iteration scheme presented in [6] considers an augmented Lagrangian formalism. We have chosen here to present ours with the initial split Bregman formulation.

First, let us restate the sparse approximation problem:

    min_X ||Y − ΦX||_2^2 + λ_1 ||A||_1 + λ_2 ||B||_1
    s.t. A = X, B = XP.   (3)

This reformulation is a key step of the split Bregman method. It decouples the three terms and allows them to be optimized separately within the Bregman iterations. To set up this iteration scheme, Eq. (3) must be transformed into an unconstrained problem:

    min_{X,A,B} ||Y − ΦX||_2^2 + λ_1 ||A||_1 + λ_2 ||B||_1 + (μ_1/2) ||X − A||_2^2 + (μ_2/2) ||XP − B||_2^2.   (4)

The split Bregman iterations can then be expressed as [8]:

    (X^{i+1}, A^{i+1}, B^{i+1}) = argmin_{X,A,B} ||Y − ΦX||_2^2 + λ_1 ||A||_1 + λ_2 ||B||_1
                                  + (μ_1/2) ||X − A^i + D_A^i||_2^2 + (μ_2/2) ||XP − B^i + D_B^i||_2^2   (5)
    D_A^{i+1} = D_A^i + (X^{i+1} − A^{i+1})   (6)
    D_B^{i+1} = D_B^i + (X^{i+1} P − B^{i+1})   (7)

Thanks to the split of the three terms realized above, the minimization of Eq. (5) can be realized iteratively by alternately updating the variables in the system:

    X^{i+1} = argmin_X ||Y − ΦX||_2^2 + (μ_1/2) ||X − A^i + D_A^i||_2^2 + (μ_2/2) ||XP − B^i + D_B^i||_2^2   (8)
    A^{i+1} = argmin_A λ_1 ||A||_1 + (μ_1/2) ||X^{i+1} − A + D_A^i||_2^2   (9)
    B^{i+1} = argmin_B λ_2 ||B||_1 + (μ_2/2) ||X^{i+1} P − B + D_B^i||_2^2   (10)

Only a few iterations of this system are necessary for convergence. In our implementation, this update is only performed once at each iteration of the global optimization algorithm.

Eqs. (9) and (10) can be resolved with the soft-thresholding operator:

    A^{i+1} = SoftThreshold_{λ_1/μ_1}(X^{i+1} + D_A^i)   (11)
    B^{i+1} = SoftThreshold_{λ_2/μ_2}(X^{i+1} P + D_B^i)   (12)

Solving Eq. (8) requires the minimization of a convex differentiable function, which can be performed via classical optimization methods. We propose here to solve it deterministically. The main difficulty in extending [6] to the multi-dimensional signals case lies in this step. Let us define H from Eq. (8) such that

    X^{i+1} = argmin_X H(X).   (13)

Differentiating this expression with respect to X yields

    dH/dX = (2Φ^T Φ + μ_1 I) X + X (μ_2 P P^T) − 2Φ^T Y   (14)
            + μ_1 (D_A^i − A^i) + μ_2 (D_B^i − B^i) P^T,   (15)

where I is the identity matrix. The minimum X̂ = X^{i+1} of Eq. (8) is obtained by solving dH/dX(X̂) = 0, which is a Sylvester equation

    W X̂ + X̂ Z = C^i,   (16)

with W = 2Φ^T Φ + μ_1 I, Z = μ_2 P P^T and C^i = 2Φ^T Y + μ_1 (A^i − D_A^i) + μ_2 (B^i − D_B^i) P^T. Fortunately, in our case, W and Z are real symmetric matrices. Thus, they can be diagonalized as follows:

    W = F D_w F^T   (17)
    Z = G D_z G^T   (18)

and Eq. (16) can then be rewritten as

    D_w X̂′ + X̂′ D_z = C^{i′},   (19)

with X̂′ = F^T X̂ G and C^{i′} = F^T C^i G. X̂′ is then obtained by

    X̂′(:, s) = (D_w + D_z(s, s) I)^{−1} C^{i′}(:, s),   ∀s ∈ {1, ..., S},   (20)

where the notation (:, s) denotes column s of a matrix. Going back to X̂ is performed with X̂ = F X̂′ G^T.

W and Z being independent of the iteration i considered, their diagonalization is done only once and for all, as is the computation of the terms (D_w + D_z(s, s) I)^{−1}, ∀s ∈ {1, ..., S}. Thus, this update does not require heavy computation. The full algorithm is summarized below.

3.2. Algorithm sum-up

1: Input: Y, Φ, P
2: Parameters: λ_1, λ_2, μ_1, μ_2, ε, iterMax, kMax
3: Init D_A^0, D_B^0 and X^0
4: A^0 = X^0, B^0 = X^0 P, W = 2Φ^T Φ + μ_1 I, Z = μ_2 P P^T
5: Compute D_w, D_z, F and G.

... activity and d its duration. A decomposition matrix X could then be written:

    X = Σ_{i=1}^{M} a_i P_{ind_i, m_i, d_i}   (21)
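Pulling Section 3 together, the following NumPy sketch shows one possible implementation of the iterations summarized in the algorithm above. It is a sketch under stated assumptions: the zero initializations, the variable names and the fixed iteration count are illustrative, since the stopping criteria ε, iterMax and kMax are not specified in this excerpt.

import numpy as np

def soft_threshold(M, tau):
    # Entrywise soft-thresholding, i.e. the proximal operator of tau * ||.||_1.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def multi_sssa(Y, Phi, P, lam1, lam2, mu1, mu2, n_iter=100):
    N, T = Phi.shape[1], Y.shape[1]
    X = np.zeros((N, T))
    A = X.copy()            # A^0 = X^0
    B = X @ P               # B^0 = X^0 P
    DA = np.zeros_like(A)
    DB = np.zeros_like(B)

    # W and Z do not depend on the iteration: diagonalize once, Eqs. (17)-(18).
    W = 2.0 * Phi.T @ Phi + mu1 * np.eye(N)
    Z = mu2 * P @ P.T
    dw, F = np.linalg.eigh(W)    # W = F diag(dw) F^T
    dz, G = np.linalg.eigh(Z)    # Z = G diag(dz) G^T

    for _ in range(n_iter):
        # X-update, Eq. (8): solve the Sylvester equation W X + X Z = C^i, Eq. (16).
        C = 2.0 * Phi.T @ Y + mu1 * (A - DA) + mu2 * (B - DB) @ P.T
        Cp = F.T @ C @ G                         # C^{i'} = F^T C^i G
        Xp = Cp / (dw[:, None] + dz[None, :])    # column-wise solve, Eq. (20)
        X = F @ Xp @ G.T                         # back-transform X = F X' G^T

        # A- and B-updates, Eqs. (11)-(12): soft-thresholding.
        A = soft_threshold(X + DA, lam1 / mu1)
        B = soft_threshold(X @ P + DB, lam2 / mu2)

        # Bregman variable updates, Eqs. (6)-(7).
        DA += X - A
        DB += X @ P - B
    return X

Since F, G, D_w and D_z are computed once outside the loop, each iteration reduces to matrix products and entrywise operations, which is precisely the low-cost update argued for at the end of Section 3.1.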