
MULTI-DIMENSIONAL SPARSE STRUCTURED SIGNAL APPROXIMATION USING SPLIT BREGMAN ITERATIONS

Yoann Isaac¹·², Quentin Barthélemy¹, Jamal Atif², Cédric Gouy-Pailler¹, Michèle Sebag²

¹ CEA, LIST, Data Analysis Tools Laboratory, 91191 Gif-sur-Yvette CEDEX, France
² TAO, CNRS − INRIA − LRI, Université Paris-Sud, 91405 Orsay, France

Published in: ICASSP 2013 - 38th IEEE International Conference on Acoustics, Speech and Signal Processing, May 2013, Vancouver, Canada, pp. 3826-3830. HAL Id: hal-00862645, https://hal.archives-ouvertes.fr/hal-00862645

ABSTRACT

The paper focuses on the sparse approximation of signals using overcomplete representations, such that it preserves the (prior) structure of multi-dimensional signals. The underlying optimization problem is tackled using a multi-dimensional extension of the split Bregman optimization approach. An extensive empirical evaluation shows how the proposed approach compares to the state of the art depending on the signal features.

Index Terms: Sparse approximation, Regularization, Fused-LASSO, Split Bregman, Multidimensional signals

1. INTRODUCTION

Dictionary-based representations proceed by approximating a signal via a linear combination of dictionary elements, referred to as atoms. Sparse dictionary-based representations, where each signal involves but a few atoms, have been thoroughly investigated for 1D and 2D signals on account of their good properties, as they enable robust transmission (compressed sensing [1]) or image in-painting [2]. The dictionary is either given, based on domain knowledge, or learned from the signals [3].

The so-called sparse approximation algorithm aims at finding a sparse approximate representation of the considered signals using this dictionary, by minimizing a weighted sum of the approximation loss and the representation sparsity (see [4] for a survey). When available, prior knowledge about the application domain can also be used to guide the search toward "plausible" decompositions.

This paper focuses on sparse approximation enforcing a structured decomposition property, defined as follows. Let the signals be structured (e.g. being recorded in consecutive time steps); the structured decomposition property then requires that the signal structure be preserved in the dictionary-based representation (e.g. the atoms involved in the approximation of consecutive signals have "close" weights). The structured decomposition property is enforced through adding a total variation (TV) penalty to the minimization objective.

In the 1D case, the minimization of the above overall objective can be tackled using the fused-LASSO approach first introduced in [5]. In the case of multi-dimensional signals¹, however, the minimization problem presents additional difficulties. The first contribution of this paper is to show how this problem can be handled efficiently, by extending the (1D) split Bregman fused-LASSO solver presented in [6] to the multi-dimensional case. The second contribution is a comprehensive experimental study, comparing state-of-the-art algorithms to the presented approach, referred to as Multi-SSSA, and establishing their relative performance depending on diverse features of the structured signals.

Section 2 introduces the formal background. The proposed optimization approach is described in Section 3. Section 4 presents our experimental setting and reports on the results. The presented approach is discussed w.r.t. related work in Section 5, and the paper concludes with some perspectives for further research.

¹ Our motivating application considers electro-encephalogram (EEG) signals, where the number of sensors ranges up to a few hundred.
2. PROBLEM STATEMENT

Let Y = [y_1, ..., y_T] ∈ R^{C×T} be a matrix made of T C-dimensional signals and Φ ∈ R^{C×N} an overcomplete dictionary of N atoms (N > C). We consider the linear model

    y_t = Φ x_t + e_t,   t ∈ {1, ..., T},                                  (1)

in which X = [x_1, ..., x_T] ∈ R^{N×T} stands for the decomposition matrix and E = [e_1, ..., e_T] ∈ R^{C×T} is a zero-mean Gaussian noise matrix.

The sparse structured decomposition problem consists of approximating the y_i, i ∈ {1, ..., T}, by decomposing them on the dictionary Φ, such that the structure of the decompositions x_i reflects that of the signals y_i. This goal is formalized as the minimization² of the objective function

    min_X ||Y − ΦX||_2^2 + λ_1 ||X||_1 + λ_2 ||XP||_1,                      (2)

where λ_1 and λ_2 are regularization coefficients and P encodes the signal structure (provided by the prior knowledge) as in [7]. In the remainder of the paper, the considered structure is that of the temporal ordering of the signals, i.e. ||XP||_1 = Σ_{t=2}^{T} ||X_t − X_{t−1}||_1.

² ||A||_p = (Σ_i Σ_j |A_{i,j}|^p)^{1/p}. The case p = 2 corresponds to the classical Frobenius norm.
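As an illustration of the model (our own sketch, not part of the original paper), the code below builds the first-order difference structure matrix P used here, so that ||XP||_1 equals the TV penalty Σ_t ||X_t − X_{t−1}||_1, and evaluates the objective of Eq. (2). The helper names structure_matrix and objective are hypothetical.

```python
import numpy as np

def structure_matrix(T):
    # P in R^{T x (T-1)}: column t carries -1/+1 so that
    # (X @ P)[:, t] = X[:, t+1] - X[:, t], hence ||X P||_1 sums
    # ||x_t - x_{t-1}||_1 over consecutive time steps.
    P = np.zeros((T, T - 1))
    for t in range(T - 1):
        P[t, t] = -1.0
        P[t + 1, t] = 1.0
    return P

def objective(Y, Phi, X, P, lam1, lam2):
    # Eq. (2): ||Y - Phi X||_2^2 + lam1 ||X||_1 + lam2 ||X P||_1,
    # with ||.||_2 the Frobenius norm (footnote 2).
    R = Y - Phi @ X
    return (np.sum(R ** 2)
            + lam1 * np.abs(X).sum()
            + lam2 * np.abs(X @ P).sum())
```

For instance, with Phi of shape (C, N) and X of shape (N, T), objective penalizes both nonzero coefficients and changes between consecutive columns of X, which is exactly the structured-sparsity trade-off of Eq. (2).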
3. OPTIMIZATION STRATEGY

3.1. Algorithm description

Bregman iterations have been shown to be very efficient for ℓ1-regularized problems [8]. For convex problems with linear constraints, the split Bregman iteration technique is equivalent to the method of multipliers and to the augmented Lagrangian method [9]. The iteration scheme presented in [6] considers an augmented Lagrangian formalism. We have chosen here to present ours with the initial split Bregman formulation.

First, let us restate the sparse approximation problem:

    min_{X,A,B} ||Y − ΦX||_2^2 + λ_1 ||A||_1 + λ_2 ||B||_1
    s.t. A = X, B = XP.                                                    (3)

This reformulation is a key step of the split Bregman method: it decouples the three terms and allows them to be optimized separately within the Bregman iterations. To set up this iteration scheme, Eq. (3) must be transformed into an unconstrained problem:

    min_{X,A,B} ||Y − ΦX||_2^2 + λ_1 ||A||_1 + λ_2 ||B||_1
                + (µ_1/2) ||X − A||_2^2 + (µ_2/2) ||XP − B||_2^2.          (4)

The split Bregman iterations can then be expressed as [8]:

    (X^{i+1}, A^{i+1}, B^{i+1}) = argmin_{X,A,B} ||Y − ΦX||_2^2
        + λ_1 ||A||_1 + λ_2 ||B||_1
        + (µ_1/2) ||X − A + D_A^i||_2^2
        + (µ_2/2) ||XP − B + D_B^i||_2^2                                   (5)
    D_A^{i+1} = D_A^i + (X^{i+1} − A^{i+1})                                (6)
    D_B^{i+1} = D_B^i + (X^{i+1} P − B^{i+1})                              (7)

Thanks to the splitting of the three terms realized above, the minimization of Eq. (5) can be carried out iteratively by alternately updating the variables in the system

    X^{i+1} = argmin_X ||Y − ΦX||_2^2 + (µ_1/2) ||X − A^i + D_A^i||_2^2
              + (µ_2/2) ||XP − B^i + D_B^i||_2^2                           (8)
    A^{i+1} = argmin_A λ_1 ||A||_1 + (µ_1/2) ||X^{i+1} − A + D_A^i||_2^2   (9)
    B^{i+1} = argmin_B λ_2 ||B||_1 + (µ_2/2) ||X^{i+1} P − B + D_B^i||_2^2 (10)

Only a few iterations of this system are necessary for convergence. In our implementation, this update is performed only once at each iteration of the global optimization algorithm.

Eqs. (9) and (10) can be solved with the soft-thresholding operator:

    A^{i+1} = SoftThreshold_{λ_1/µ_1}(X^{i+1} + D_A^i)                     (11)
    B^{i+1} = SoftThreshold_{λ_2/µ_2}(X^{i+1} P + D_B^i)                   (12)

Solving Eq. (8) requires the minimization of a convex differentiable function, which can be performed via classical optimization methods. We propose here to solve it deterministically. The main difficulty in extending [6] to the multi-dimensional signal case lies in this step. Let us define H from Eq. (8) such that

    X^{i+1} = argmin_X H(X).                                               (13)

Differentiating this expression with respect to X yields

    dH/dX = (2Φ^T Φ + µ_1 I) X + X (µ_2 P P^T) − 2Φ^T Y                    (14)
            + µ_1 (D_A^i − A^i) + µ_2 (D_B^i − B^i) P^T,                   (15)

where I is the identity matrix. The minimum X̂ = X^{i+1} of Eq. (8) is obtained by solving dH/dX (X̂) = 0, which is a Sylvester equation

    W X̂ + X̂ Z = C^i,                                                      (16)

with W = 2Φ^T Φ + µ_1 I, Z = µ_2 P P^T and C^i = 2Φ^T Y + µ_1 (A^i − D_A^i) + µ_2 (B^i − D_B^i) P^T. Fortunately, in our case, W and Z are real symmetric matrices. Thus, they can be diagonalized as follows:

    W = F D_w F^T                                                          (17)
    Z = G D_z G^T                                                          (18)

and Eq. (16) can then be rewritten as

    D_w X̂′ + X̂′ D_z = C^{i′},                                             (19)

with X̂′ = F^T X̂ G and C^{i′} = F^T C^i G. X̂′ is then obtained columnwise by

    ∀ s ∈ {1, ..., S},   X̂′(:, s) = (D_w + D_z(s, s) I)^{−1} C^{i′}(:, s),

where the notation (:, s) indexes column s of a matrix. Going back to X̂ is performed with X̂ = F X̂′ G^T.

W and Z being independent of the iteration i considered, their diagonalization is done once and for all, as is the computation of the terms (D_w + D_z(s, s) I)^{−1}, ∀ s ∈ {1, ..., S}. Thus, this update does not require heavy computation. The full algorithm is summarized below.

3.2. Algorithm summary

1: Input: Y, Φ, P
2: Parameters: λ_1, λ_2, µ_1, µ_2, ε, iterMax, kMax
3: Initialize D_A^0, D_B^0 and X^0
4: A^0 = X^0, B^0 = X^0 P, W = 2Φ^T Φ + µ_1 I, Z = µ_2 P P^T
5: Compute D_w, D_z, F and G.
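To make the scheme above concrete, here is a minimal sketch of one possible implementation in Python/NumPy. It is our own illustration rather than the authors' code: the function names multi_sssa and soft_threshold, the zero initialization, and the fixed iteration count (in place of the ε/iterMax/kMax stopping rules of Section 3.2) are assumptions.

```python
import numpy as np

def soft_threshold(V, tau):
    # Entrywise soft-thresholding: the proximal operator of tau * ||.||_1.
    return np.sign(V) * np.maximum(np.abs(V) - tau, 0.0)

def multi_sssa(Y, Phi, P, lam1, lam2, mu1, mu2, n_iter=100):
    # Sketch of the split Bregman scheme of Eqs. (5)-(12) and (16)-(19).
    N = Phi.shape[1]
    T = Y.shape[1]
    X = np.zeros((N, T))
    A = X.copy()
    B = X @ P
    DA = np.zeros_like(A)
    DB = np.zeros_like(B)

    # W and Z do not depend on the iteration: diagonalize them once,
    # as in Eqs. (17)-(18).
    W = 2.0 * Phi.T @ Phi + mu1 * np.eye(N)
    Z = mu2 * P @ P.T
    dw, F = np.linalg.eigh(W)   # W = F diag(dw) F^T
    dz, G = np.linalg.eigh(Z)   # Z = G diag(dz) G^T
    PhiTY2 = 2.0 * Phi.T @ Y

    for _ in range(n_iter):
        # X-update: solve the Sylvester equation W X + X Z = C (Eq. 16)
        # entrywise in the eigenbases (Eq. 19).
        C = PhiTY2 + mu1 * (A - DA) + mu2 * (B - DB) @ P.T
        Cp = F.T @ C @ G
        Xp = Cp / (dw[:, None] + dz[None, :])
        X = F @ Xp @ G.T
        # A- and B-updates: soft-thresholding, Eqs. (11)-(12).
        A = soft_threshold(X + DA, lam1 / mu1)
        B = soft_threshold(X @ P + DB, lam2 / mu2)
        # Bregman variable updates, Eqs. (6)-(7).
        DA += X - A
        DB += X @ P - B
    return X
```

As in the paper's implementation note, the alternating system (8)-(10) is swept only once per global iteration. The divisor dw[:, None] + dz[None, :] is strictly positive because W is positive definite and Z is positive semi-definite, so the entrywise division is safe.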
[...] activity and d its duration. A decomposition matrix X could then be written:

    X = Σ_{i=1}^{M} a_i P_{ind_i, m_i, d_i}                                (21)