Sparse Methods for Model Estimation with Applications to Radar Imaging


Sparse Methods for Model Estimation with Applications to Radar Imaging

Dissertation Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By Christian D. Austin, B.E., B.S., M.S.
Graduate Program in Electrical and Computer Engineering
The Ohio State University
2012

Dissertation Committee: Dr. Randolph L. Moses, Advisor; Dr. Lee C. Potter; Dr. Philip Schniter

© Copyright by Christian D. Austin 2012

Abstract

In additive component model estimation problems, the number of additive components (model order) and the values of the model parameters in each of the additive components are estimated. Traditional methods typically estimate parameters for a set of models of fixed order; parameter estimation is performed over a continuous space when parameters are not discrete. The model order is estimated as the minimizer, over the set of fixed model orders, of a cost function that trades off signal fit to measurements against model complexity.

This dissertation explores dictionary-based estimation methods for joint model order and parameter estimation. In dictionary-based estimation, the continuous parameter space is discretized, forming a dictionary. Each column of the dictionary is a model component at a sampled parameter value, and a linear combination of a subset of columns is used to represent the model. It is assumed that the model consists of a small number of components, and a sparse reconstruction algorithm is used to select a sparse superposition of columns to represent the signal. The number of columns selected is the estimated model order, and the parameters of each selected column are the parameter estimates.

We examine both static and dynamic dictionary-based estimation methods. In static estimation, the dictionary is fixed, while in dynamic estimation, dictionary parameters adapt to the data.
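The dictionary-based estimation procedure described above can be sketched in a few lines. The example below builds a dictionary of complex exponentials on a frequency grid and uses orthogonal matching pursuit (OMP) as the sparse reconstruction step; the grid size, the two true frequencies (placed on-grid), the noise level, and the stopping threshold are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, G = 64, 128                       # samples, dictionary grid size
freqs = np.arange(G) / G             # sampled frequency grid
n = np.arange(N)
# Dictionary: one unit-norm complex exponential per sampled frequency
D = np.exp(2j * np.pi * n[:, None] * freqs[None, :]) / np.sqrt(N)

# Two-component signal (frequencies chosen on-grid) plus complex noise
x = (1.0 * np.exp(2j * np.pi * 0.25 * n) + 0.8 * np.exp(2j * np.pi * 0.375 * n)
     + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)))

# OMP: greedily select atoms until the residual reaches the noise floor;
# the number of atoms selected is the model-order estimate
support, r = [], x.copy()
while np.linalg.norm(r) > 0.75 and len(support) < 10:
    support.append(int(np.argmax(np.abs(D.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
    r = x - D[:, support] @ coef

print([float(freqs[i]) for i in sorted(support)])   # → [0.25, 0.375]
```

The number of selected columns (two) is the model-order estimate, and the grid frequencies of the selected columns are the parameter estimates. If the true frequencies fell between grid points, the estimates would show exactly the quantization bias that the dissertation's dynamic dictionaries are designed to remove.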
We propose two new dynamic dictionary-based estimation algorithms and examine the performance of both static and dynamic algorithms in terms of model order selection probability and parameter estimation error when dictionaries are highly correlated. Highly correlated dictionaries arise from using closely spaced parameter samples in dictionary formation; we propose a method for selecting algorithm settings based on an information criterion. We show the following results: 1) dictionary-based estimation methods are capable of performance comparable to the Cramér-Rao lower bound and to traditional benchmark estimation algorithms over a wide range of signal-to-noise ratios; 2) in the complex exponential model, dictionary-based estimation can superresolve closely spaced frequencies; and 3) dynamic dictionary methods overcome parameter estimation bias caused by quantization error in static dictionary-based estimation.

We apply dictionary-based estimation to the problem of 3D synthetic aperture radar (SAR) imaging. Traditional 3D SAR image formation requires collection of data over a large contiguous sector of azimuth-elevation aspect angles; this collection is difficult or impossible to obtain in practice. We show that dictionary-based estimation can be used to produce well-resolved, wide-angle 3D SAR images from sparse, irregular flight paths.

In memory of my grandfathers, David Austin and Walter Sobol

Acknowledgments

I would like to thank my family for their continual support, love, and encouragement. My parents have always emphasized the importance of education and stood beside me in all of my decisions; I would not be where I am today without them. When in need of someone to talk to in tough times, my mother has always been there for me, bringing the current situation into perspective. I've often drawn upon my father's realist views and life experiences when in need of motivation or focus.
My sister, Marielle, and I left home at the same time and experienced college concurrently, and I feel that we intellectually "grew up" together. I value the experiences that we had together during this time of our lives. To the rest of my family, who inquired into my progress, offered advice, and never questioned my many years of graduate school, I thank you for showing interest in my work and always believing in me.

My academic advisor and mentor, Professor Randolph Moses, taught me how to conduct research, from formulating a problem to publishing the results. I thank him not only for teaching me the skills necessary to be a researcher, but for always being very professional and making my graduate school experience a very pleasant one.

Having good officemates is an important part of graduate school, given that you share many hours of your graduate life together. I thank both Dr. Josh Ash and Dr. Julie Jackson for being great officemates. We've had the opportunity to discuss a multitude of ideas, and many of our conversations have taught me something new; hopefully, I have reciprocated. One day I hope to implement at least a small fraction of the projects that Josh and I have discussed over the years. Members of the Compressive Sensing reading group also deserve my gratitude, especially Professor Lee Potter and Professor Phil Schniter, for the valuable conversations that we had about compressive sensing and research in general.

I thank my friend and fellow graduate student, Anthony D'Orazio, for listening to my problems, and at times, just dealing with me during the ups and downs of graduate school. Being able to discuss everyday problems with someone who is in a similar situation and can relate was a great stress relief. Ed Zelnio of the Air Force Research Laboratory (AFRL) has been supportive of my research and has provided valuable input; I am thankful for his involvement. I also owe my gratitude to Dr.
Greg Arnold, who volunteered to be our AFRL collaborator during Ohio Space Grant Consortium funding. Lastly, my graduate research would not have been possible without financial support from the AFRL, the Ohio Space Grant Consortium, and the NSF IUCRC; I am greatly appreciative of their support.

Vita

November 11, 1980 ......... Born - West Islip, New York

2003 ......... B.E. Computer Engineering, B.S. Mathematics, State University of New York at Stony Brook

2006 ......... M.S. Electrical Engineering, The Ohio State University

2003-2004, 2006-2009 ......... Graduate Fellow, The Ohio State University

2004-2006, 2009-present ......... Graduate Research Associate, The Ohio State University

Publications

C. D. Austin, E. Ertin, and R. L. Moses, “Sparse Signal Methods for 3D Radar Imaging,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 3, pp. 408-423, June 2011.

C. D. Austin, J. N. Ash, and R. L. Moses, “Parameter Estimation Using Sparse Reconstruction With Dynamic Dictionaries,” Proc. IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP), Prague, Czech Republic, May 22-27, 2011.

C. D. Austin, J. N. Ash, and R. L. Moses, “Performance Analysis of Sparse 3D SAR Imaging,” Algorithms for Synthetic Aperture Radar Imagery XVIII, SPIE Defense and Security Symposium, Orlando, FL, April 25-29, 2011.

C. D. Austin, E. Ertin, J. N. Ash, and R. L. Moses, “On the Relation Between Sparse Reconstruction and Parameter Estimation with Model Order Selection,” IEEE Journal of Selected Topics in Signal Processing, vol. 4, no. 3, pp. 560-570, June 2010.

K. E. Dungan, C. D. Austin, J. Nehrbass, and L. C. Potter, “Civilian Vehicle Radar Data Domes,” Algorithms for Synthetic Aperture Radar Imagery XVII, SPIE Defense and Security Symposium, Orlando, FL, April 5-9, 2010.

C. D. Austin, E. Ertin, and R. L.
Moses, “Sparse Multipass 3D SAR Imaging: Applications to the GOTCHA Data Set,” Algorithms for Synthetic Aperture Radar Imagery XVI, SPIE Defense and Security Symposium, Orlando, FL, April 13-17, 2009.

M. Ferrara, J. A. Jackson, and C. Austin, “Enhancement of Multi-Pass 3D Circular SAR Images using Sparse Reconstruction Techniques,” Algorithms for Synthetic Aperture Radar Imagery XVI, SPIE Defense and Security Symposium, Orlando, FL, April 13-17, 2009.

C. D. Austin, E. Ertin, J. N. Ash, and R. L. Moses, “On the Relation Between Sparse Sampling and Parametric Estimation,” IEEE 13th DSP Workshop and 5th Sig. Proc. Workshop (DSP/SPE 2009), Jan. 4-7, 2009.

C. D. Austin and R. L. Moses, “Wide-angle Sparse 3D Synthetic Aperture Radar Imaging for Nonlinear Flight Paths,” IEEE National Aerospace and Electronics Conference (NAECON) 2008, July 16-18, 2008.

E. Ertin, C. D. Austin, S. Sharma, R. L. Moses, and L. C. Potter, “GOTCHA Experience Report: Three-Dimensional SAR Imaging with Complete Circular Apertures,” Algorithms for Synthetic Aperture Radar Imagery XIV, SPIE Defense and Security Symposium, Orlando, FL, April 9-13, 2007.

C. D. Austin, “Interferometric Synthetic Aperture Radar Height Estimation with Multiple Scattering Centers in a Resolution Cell,” Master's Thesis, The Ohio State University, 2006.

C. D. Austin and R. L. Moses, “Interferometric Synthetic Aperture Radar Detection and Estimation Based 3D Image Reconstruction,” Algorithms for Synthetic Aperture Radar Imagery XIII, SPIE Defense and Security Symposium, Orlando, FL, Apr. 17-21, 2006.

C. D. Austin and R. L. Moses, “IFSAR Processing for 3D Target Reconstruction,” Algorithms for Synthetic Aperture Radar Imagery XII, SPIE Defense and Security Symposium, Orlando, FL, Mar. 28 - Apr. 1, 2005.

R. L. Moses, L. C. Potter, E. Ertin, and C. D. Austin, “Synthetic Aperture Radar Visualization,”
Recommended publications
  • Total Variation Deconvolution Using Split Bregman
    Published in Image Processing On Line on 2012-07-30. Submitted on 2012-00-00, accepted on 2012-00-00. ISSN 2105-1232 © 2012 IPOL & the authors CC-BY-NC-SA. This article is available online with supplementary materials, software, datasets and online demo at http://dx.doi.org/10.5201/ipol.2012.g-tvdc
    Total Variation Deconvolution using Split Bregman. Pascal Getreuer, Yale University ([email protected]).
    Abstract: Deblurring is the inverse problem of restoring an image that has been blurred and possibly corrupted with noise. Deconvolution refers to the case where the blur to be removed is linear and shift-invariant, so it may be expressed as a convolution of the image with a point spread function. Convolution corresponds in the Fourier domain to multiplication, and deconvolution is essentially Fourier division. The challenge is that, since the multipliers are often small for high frequencies, direct division is unstable and plagued by noise present in the input image. Effective deconvolution requires a balance between frequency recovery and noise suppression. Total variation (TV) regularization is a successful technique for achieving this balance in deblurring problems. It was introduced to image denoising by Rudin, Osher, and Fatemi [4] and then applied to deconvolution by Rudin and Osher [5]. In this article, we discuss TV-regularized deconvolution with Gaussian noise and its efficient solution using the split Bregman algorithm of Goldstein and Osher [16]. We show a straightforward extension for Laplace or Poisson noise and develop empirical estimates for the optimal value of the regularization parameter λ.
  • Accelerating Matching Pursuit for Multiple Time-Frequency Dictionaries
    Proceedings of the 23rd International Conference on Digital Audio Effects (DAFx-20), Vienna, Austria, September 8-12, 2020.
    ACCELERATING MATCHING PURSUIT FOR MULTIPLE TIME-FREQUENCY DICTIONARIES. Zdeněk Průša, Nicki Holighaus and Peter Balazs, Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria ([email protected], [email protected], [email protected]).
    Abstract: Matching pursuit (MP) algorithms are widely used greedy methods to find K-sparse signal approximations in redundant dictionaries. We present an acceleration technique and an implementation of the matching pursuit algorithm acting on a multi-Gabor dictionary, i.e., a concatenation of several Gabor-type time-frequency dictionaries, consisting of translations and modulations of possibly different windows, time- and frequency-shift parameters. The proposed acceleration is based on pre-computing and thresholding inner products between atoms and on updating the residual directly in the coefficient domain, i.e., without the round-trip to the signal domain. An overview of greedy algorithms, a class of algorithms MP falls under, can be found in [10, 11] and, in the context of audio and music processing, in [12, 13, 14]. Notable applications of MP algorithms in the audio domain include analysis [15], [16], coding [17, 18, 19], time scaling/pitch shifting [20], [21], source separation [22], denoising [23], and partial and harmonic detection and tracking [24]. We present a method for accelerating MP-based algorithms acting on a single Gabor-type time-frequency dictionary or on a concatenation of several Gabor dictionaries with possibly different windows and parameters.
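The paper's core trick, updating the residual entirely in the coefficient domain using precomputed inner products between atoms, can be sketched for plain matching pursuit on a generic (non-Gabor) dictionary. The dictionary, signal, and iteration count below are illustrative assumptions; the key point is that the loop never forms a signal-domain residual.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 256
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
x = 2.0 * D[:, 10] - 1.5 * D[:, 100]      # 2-sparse synthetic signal

G = D.T @ D                               # pre-computed inner products between atoms
c = D.T @ x                               # atom correlations; stands in for the residual
coef = np.zeros(m)
for _ in range(200):                      # matching pursuit iterations
    k = int(np.argmax(np.abs(c)))
    a = c[k]                              # MP step size <r, d_k>
    coef[k] += a
    c -= a * G[:, k]                      # residual update done in the coefficient domain

rel_err = np.linalg.norm(x - D @ coef) / np.linalg.norm(x)
print(rel_err < 1e-2)                     # the loop itself never touched the signal domain
```

The paper goes further by thresholding the stored inner products so that `G` stays sparse for Gabor-type dictionaries, which is where the practical speedup comes from.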
  • A Configurable 12-237 kS/s 12.8 mW Sparse-Approximation Engine for Mobile Data Aggregation of Compressively Sampled Physiological Signals
    68 IEEE Journal of Solid-State Circuits, Vol. 51, No. 1, January 2016. A Configurable 12-237 kS/s 12.8 mW Sparse-Approximation Engine for Mobile Data Aggregation of Compressively Sampled Physiological Signals. Fengbo Ren, Member, IEEE, and Dejan Marković, Member, IEEE.
    Abstract: Compressive sensing (CS) is a promising technology for realizing low-power and cost-effective wireless sensor nodes (WSNs) in pervasive health systems for 24/7 health monitoring. Due to the high computational complexity (CC) of the reconstruction algorithms, software solutions cannot fulfill the energy efficiency needs for real-time processing. In this paper, we present a 12-237 kS/s 12.8 mW sparse-approximation (SA) engine chip that enables the energy-efficient data aggregation of compressively sampled physiological signals on mobile platforms. The SA engine chip, integrated in 40 nm CMOS, can support the simultaneous reconstruction of over 200 channels of physiological signals while consuming <1% of a smartphone's power budget. Such energy-efficient reconstruction enables two-to-three times energy saving. The CS framework has several intrinsic advantages. First, random encoding is a universal compression method that can effectively apply to all compressible signals regardless of what their sparse domain is. This is a desirable merit for the data fusion across multiple signal sources. Second, sampling and compression can be performed at the same stage in CS, allowing for a sampling rate that is significantly lower than the Nyquist rate. Therefore, CS has a potential to greatly impact the data acquisition devices that are sensitive to cost, energy consumption, and portability, such as wireless sensor nodes (WSNs) in mobile and wearable applications [5].
  • Choose Your Path Wisely: Gradient Descent in a Bregman Distance Framework
    SIAM J. Imaging Sciences, © 2021 Society for Industrial and Applied Mathematics, Vol. 14, No. 2, pp. 814-843. Choose Your Path Wisely: Gradient Descent in a Bregman Distance Framework. Martin Benning, Marta M. Betcke, Matthias J. Ehrhardt, and Carola-Bibiane Schönlieb.
    Abstract: We propose an extension of a special form of gradient descent, known in the literature as linearized Bregman iteration, to a larger class of nonconvex functions. We replace the classical (squared) two-norm metric in the gradient descent setting with a generalized Bregman distance, based on a proper, convex, and lower semicontinuous function. The algorithm's global convergence is proven for functions that satisfy the Kurdyka-Łojasiewicz property. Examples illustrate that features of different scale are being introduced throughout the iteration, transitioning from coarse to fine. This coarse-to-fine approach with respect to scale allows us to recover solutions of nonconvex optimization problems that are superior to those obtained with conventional gradient descent, or even projected and proximal gradient descent. The effectiveness of the linearized Bregman iteration in combination with early stopping is illustrated for the applications of parallel magnetic resonance imaging, blind deconvolution, as well as image classification with neural networks.
    Key words: nonconvex optimization, nonsmooth optimization, gradient descent, Bregman iteration, linearized Bregman iteration, parallel MRI, blind deconvolution, deep learning. AMS subject classifications: 49M37, 65K05, 65K10, 90C26, 90C30. DOI: 10.1137/20M1357500.
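For the convex, sparse-recovery special case, the linearized Bregman iteration that the paper generalizes has a very compact form. The sketch below uses one standard variant (a dual gradient step followed by soft thresholding) on a small synthetic compressed sensing problem; the threshold `lam`, step size `delta`, and iteration count are assumed values chosen for this toy instance, not from the paper.

```python
import numpy as np

def shrink(v, lam):
    # soft threshold: the proximal map of lam * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

rng = np.random.default_rng(3)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
u_true = np.zeros(n); u_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
b = A @ u_true                          # underdetermined measurements

# Linearized Bregman iteration for  min lam*||u||_1 + 0.5*||u||^2  s.t.  A u = b
lam, delta = 10.0, 0.1
v = np.zeros(n)
u = np.zeros(n)
for _ in range(5000):
    v += delta * (A.T @ (b - A @ u))    # gradient step on the (dual) variable
    u = shrink(v, lam)                  # primal iterate recovered by soft thresholding

print(np.flatnonzero(np.abs(u) > 0.5))  # indices of the recovered sparse support
```

The paper's contribution is to replace the implicit squared two-norm in this scheme with a general Bregman distance, which changes `shrink` into the proximal map of other convex functions and extends the analysis to nonconvex objectives.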
  • Improved Greedy Algorithms for Sparse Approximation of a Matrix in Terms of Another Matrix
    Improved Greedy Algorithms for Sparse Approximation of a Matrix in Terms of Another Matrix. Crystal Maung and Haim Schweitzer, Department of Computer Science, The University of Texas at Dallas.
    Abstract: We consider simultaneously approximating all the columns of a data matrix in terms of a few selected columns of another matrix that is sometimes called "the dictionary". The challenge is to determine a small subset of the dictionary columns that can be used to obtain an accurate prediction of the entire data matrix. Previously proposed greedy algorithms for this task compare each data column with all dictionary columns, resulting in algorithms that may be too slow when both the data matrix and the dictionary matrix are large. A previously proposed approach for accelerating the run time requires large amounts of memory to keep temporary values during the run of the algorithm. We propose two new algorithms that can be used even when both the data matrix and the dictionary matrix are large. The first algorithm is exact, with output identical to some previously proposed greedy algorithms. It takes significantly less memory when compared to the current state-of-the-art, and runs much faster when the dictionary matrix is sparse. The second algorithm uses a low-rank approximation to the data matrix to further improve the run time. The algorithms are based on new recursive formulas for computing the greedy selection criterion. The formulas enable decoupling most of the computations related to the data matrix from the computations related to the dictionary matrix.
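The basic greedy criterion that such algorithms accelerate can be written down directly: score every dictionary column against the residual of the whole data matrix at once, pick the best, and re-project. The matrices, the three generating columns, and the selection budget below are illustrative assumptions; the paper's contribution is computing this same criterion recursively without touching the full data matrix each step.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, t = 50, 120, 30
B = rng.standard_normal((n, m))                 # dictionary matrix
B /= np.linalg.norm(B, axis=0)
A = B[:, [3, 50, 77]] @ rng.standard_normal((3, t))   # data built from 3 dictionary columns

selected = []
R = A.copy()                                    # residual of the entire data matrix
for _ in range(6):
    if np.linalg.norm(R) < 1e-9:
        break
    # Frobenius-norm greedy criterion: total correlation of each dictionary
    # column with all residual columns simultaneously
    scores = np.linalg.norm(B.T @ R, axis=1)
    scores[selected] = -np.inf                  # never re-select a column
    selected.append(int(np.argmax(scores)))
    S = B[:, selected]
    R = A - S @ np.linalg.lstsq(S, A, rcond=None)[0]  # project A onto span of selection

print(sorted(selected))
```

Here the naive cost per step is one `B.T @ R` product over the full data matrix, which is exactly the bottleneck the paper's recursive formulas avoid.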
  • Privacy Preserving Identification Using Sparse Approximation with Ambiguization
    Privacy Preserving Identification Using Sparse Approximation with Ambiguization. Behrooz Razeghi, Slava Voloshynovskiy, Dimche Kostadinov and Olga Taran, Stochastic Information Processing Group, Department of Computer Science, University of Geneva, Switzerland ({behrooz.razeghi, svolos, dimche.kostadinov, olga.taran}@unige.ch).
    Abstract: In this paper, we consider a privacy preserving encoding framework for identification applications covering biometrics, physical object security and the Internet of Things (IoT). The proposed framework is based on a sparsifying transform, which consists of a trained linear map, an element-wise nonlinearity, and privacy amplification. The sparsifying transform and privacy amplification are not symmetric for the data owner and data user. We demonstrate that the proposed approach is closely related to sparse ternary codes (STC), a recent information-theoretic concept proposed for fast approximate nearest neighbor (ANN) search in high-dimensional feature spaces that, being machine learning in nature, also offers significant benefits in comparison to sparse approximation and binary embedding approaches. We demonstrate that the privacy of the database outsourced to a server as well as the privacy of the data user are preserved; consider, for example, biometrics, which, once disclosed, no longer hold any value for the related security applications.
    Fig. 1: Block diagram of the proposed model.
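The sparsifying transform at the heart of this framework, a linear map followed by an element-wise ternarizing nonlinearity, is easy to sketch. Below, a random projection stands in for the trained map `W`, the privacy-amplification step is omitted, and all dimensions and the threshold are assumed values; the example only illustrates that identification can run on the ternary codes alone.

```python
import numpy as np

rng = np.random.default_rng(5)
d, L = 128, 64                          # feature dimension, code length
W = rng.standard_normal((L, d)) / np.sqrt(d)   # random map standing in for a trained one

def ternary_code(x, lam=0.5):
    # element-wise ternarization T_lam: +1 / -1 for large projections, 0 otherwise
    p = W @ x
    return np.sign(p) * (np.abs(p) > lam)

X = rng.standard_normal((10, d))        # small database of feature vectors
probe = X[4] + 0.1 * rng.standard_normal(d)    # noisy query of item 4

codes = np.array([ternary_code(x) for x in X])
q = ternary_code(probe)
scores = codes @ q                      # agreement of nonzero positions and signs
print(int(np.argmax(scores)))           # → 4: nearest item found from codes alone
```

Because only the ternary codes leave the owner's side, the server matches sparse patterns rather than raw features, which is the starting point for the ambiguization and privacy-amplification steps the paper adds.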
  • Column Subset Selection Via Sparse Approximation of SVD
    Column Subset Selection via Sparse Approximation of SVD. A. Çivril (Meliksah University, Computer Engineering Department, Talas, Kayseri 38280 Turkey) and M. Magdon-Ismail (Rensselaer Polytechnic Institute, Computer Science Department, 110 8th Street, Troy, NY 12180-3590 USA).
    Abstract: Given a real matrix A ∈ R^(m×n) of rank r, and an integer k < r, the sum of the outer products of the top k singular vectors scaled by the corresponding singular values provides the best rank-k approximation A_k to A. When the columns of A have specific meaning, it might be desirable to find good approximations to A_k which use a small number of columns of A. This paper provides a simple greedy algorithm for this problem in Frobenius norm, with guarantees on the performance and the number of columns chosen. The algorithm selects c columns from A, with c = Õ((k log k / ε²) η²(A)), such that ‖A − Π_C A‖_F ≤ (1 + ε) ‖A − A_k‖_F, where C is the matrix composed of the c columns, Π_C is the matrix projecting the columns of A onto the space spanned by C, and η(A) is a measure related to the coherence in the normalized columns of A. The algorithm is quite intuitive and is obtained by combining a greedy solution to the generalization of the well-known sparse approximation problem and an existence result on the possibility of sparse approximation. We provide empirical results on various specially constructed matrices comparing our algorithm with the previous deterministic approaches based on QR factorizations and a recently proposed randomized algorithm.
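The two quantities in the paper's guarantee, the column-subset error ‖A − Π_C A‖_F and the SVD benchmark ‖A − A_k‖_F, can be computed directly to see which way the inequality must go. The matrix sizes and the naive largest-residual-column selection rule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 12)) @ rng.standard_normal((12, 40))   # rank-12 data matrix
k = 4

# Benchmark: ||A - A_k||_F from the singular values (best possible rank-k error)
s = np.linalg.svd(A, compute_uv=False)
best = float(np.sqrt(np.sum(s[k:] ** 2)))

# Naive greedy column selection: repeatedly take the column with the largest
# residual norm, then project the residual away from the chosen columns
cols, R = [], A.copy()
for _ in range(k):
    cols.append(int(np.argmax(np.linalg.norm(R, axis=0))))
    C = A[:, cols]
    R = A - C @ np.linalg.lstsq(C, A, rcond=None)[0]

err = float(np.linalg.norm(R))          # ||A - Pi_C A||_F for this selection
print(best <= err)                      # no column subset can beat the SVD bound
```

Since Π_C A has rank at most k, `best <= err` always holds; the paper's result is that a careful greedy choice of c columns keeps `err` within a (1 + ε) factor of `best`.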
  • Modified Sparse Approximate Inverses (MSPAI) for Parallel Preconditioning
    Modified Sparse Approximate Inverses (MSPAI) for Parallel Preconditioning. Alexander Kallischko, Technische Universität München, Zentrum Mathematik.
    Complete reprint of the dissertation approved by the Faculty of Mathematics of the Technische Universität München for the award of the academic degree of Doktor der Naturwissenschaften (Dr. rer. nat.). Chair: Univ.-Prof. Dr. Peter Rentrop. Examiners: 1. Univ.-Prof. Dr. Thomas Huckle, 2. Univ.-Prof. Dr. Bernd Simeon, 3. Prof. Dr. Matthias Bollhöfer, Technische Universität Carolo-Wilhelmina zu Braunschweig (written assessment). The dissertation was submitted to the Technische Universität München on 15.11.2007 and accepted by the Faculty of Mathematics on 18.2.2008.
    Abstract: The solution of large sparse and ill-conditioned systems of linear equations is a central task in numerical linear algebra. Such systems arise from many applications like the discretization of partial differential equations or image restoration. For such systems, Gaussian elimination and other classical direct solvers cannot be used, since the dimension of the underlying coefficient matrices is too large and Gaussian elimination is an O(n³) algorithm. Iterative solver techniques are an effective remedy for this problem. They allow sparsity, bandedness, or block structures to be exploited, and they can be parallelized much more easily. However, with the matrix being ill-conditioned, convergence becomes very slow or may not be guaranteed at all. Therefore, we have to employ a preconditioner. The sparse approximate inverse (SPAI) preconditioner is based on Frobenius norm minimization. It is a well-established preconditioner, since it is robust, flexible, and inherently parallel. Moreover, SPAI captures meaningful sparsity patterns automatically.
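The Frobenius-norm minimization behind SPAI decouples into one small least-squares problem per column of the approximate inverse M, which is exactly why it parallelizes so well. The sketch below uses a fixed tridiagonal sparsity pattern on a toy tridiagonal matrix; the test matrix, pattern, and comparison baseline are assumed for illustration.

```python
import numpy as np

n = 40
# Sparse, diagonally dominant test matrix (tridiagonal)
A = np.eye(n) * 4.0
A += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)

# SPAI idea: min ||A M - I||_F splits into n independent least-squares problems,
# one per column of M, each restricted to a prescribed sparsity pattern
M = np.zeros((n, n))
I = np.eye(n)
for j in range(n):
    pattern = [i for i in (j - 1, j, j + 1) if 0 <= i < n]   # assumed tridiagonal pattern
    mj, *_ = np.linalg.lstsq(A[:, pattern], I[:, j], rcond=None)
    M[pattern, j] = mj

# M approximates A^{-1}: its residual beats simple diagonal (Jacobi-like) scaling
resid_spai = np.linalg.norm(A @ M - I)
resid_diag = np.linalg.norm(A @ (I / 4.0) - I)
print(resid_spai < resid_diag)
```

Because the diagonal pattern is contained in the tridiagonal one and each column is solved to optimality over its pattern, the SPAI residual can never exceed the diagonal-scaling residual; each column's solve is independent, so the loop is trivially parallel.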
  • A New Algorithm for Non-Negative Sparse Approximation
    A New Algorithm for Non-Negative Sparse Approximation. Nicholas Schachter. Preprint submitted on 2 Jul 2020 (v1), last revised 9 Jun 2021 (v5). HAL Id: hal-02888300, https://hal.archives-ouvertes.fr/hal-02888300v1.
    HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
    Abstract: In this article we introduce a new algorithm for non-negative sparse approximation problems, based on a combination of the approaches used in orthogonal matching pursuit and basis pursuit de-noising towards solving sparse approximation problems. By taking advantage of structural properties inherent to non-negative sparse approximation problems, a branch and bound (BnB) scheme is developed that enables fast and accurate recovery of underlying dictionary atoms, even in the presence of noise. A detailed analysis of the performance of the algorithm is presented, with attention specifically paid to situations in which the algorithm will perform better or worse based on the properties of the dictionary and the required sparsity of the solution.
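The structural property this line of work exploits is that both the dictionary and the coefficients are constrained to be non-negative. A minimal greedy sketch of that setting appears below (it is a simplified stand-in for the paper's branch-and-bound scheme, not the paper's algorithm): pick the atom with the largest positive correlation, then refit all coefficients under the constraint coef ≥ 0, here via plain projected gradient descent.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m = 40, 100
D = np.abs(rng.standard_normal((n, m)))      # non-negative dictionary
D /= np.linalg.norm(D, axis=0)
x = 2.0 * D[:, 7] + 1.0 * D[:, 42]           # non-negative 2-sparse combination

def nn_lstsq(S, x, iters=500):
    # Non-negative least squares via projected gradient descent (step 1/L)
    coef = np.zeros(S.shape[1])
    step = 1.0 / np.linalg.norm(S, 2) ** 2
    for _ in range(iters):
        coef = np.maximum(coef + step * (S.T @ (x - S @ coef)), 0.0)
    return coef

support, r = [], x.copy()
for _ in range(3):
    support.append(int(np.argmax(D.T @ r)))  # largest *positive* correlation only
    coef = nn_lstsq(D[:, support], x)
    r = x - D[:, support] @ coef

print(np.all(coef >= 0), np.linalg.norm(r) < np.linalg.norm(x))   # True True
```

Non-negative dictionaries are highly coherent (all atoms point into the positive orthant), which is precisely why a greedy method alone can mis-select atoms and why the paper resorts to branch and bound for exactness.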
  • Efficient Implementation of the K-SVD Algorithm Using Batch Orthogonal Matching Pursuit
    Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit. Ron Rubinstein, Michael Zibulevsky and Michael Elad. Technion - Computer Science Department, Technical Report CS-2008-08 (revised 2008).
    Abstract: The K-SVD algorithm is a highly effective method of training overcomplete dictionaries for sparse signal representation. In this report we discuss an efficient implementation of this algorithm, which both accelerates it and reduces its memory consumption. The two basic components of our implementation are the replacement of the exact SVD computation with a much quicker approximation, and the use of the Batch-OMP method for performing the sparse-coding operations. Batch-OMP, which we also present in this report, is an implementation of the Orthogonal Matching Pursuit (OMP) algorithm which is specifically optimized for sparse-coding large sets of signals over the same dictionary. The Batch-OMP implementation is useful for a variety of sparsity-based techniques which involve coding large numbers of signals. In the report, we discuss the Batch-OMP and K-SVD implementations and analyze their complexities. The report is accompanied by Matlab toolboxes which implement these techniques, and can be downloaded at http://www.cs.technion.ac.il/~ronrubin/software.html.
    1 Introduction
    Sparsity in overcomplete dictionaries is the basis for a wide variety of highly effective signal and image processing techniques. The basic model suggests that natural signals can be efficiently explained as linear combinations of prespecified atom signals, where the linear coefficients are sparse (most of them zero).
Formally, if x is a column signal and D is the dictionary (whose columns are the atom signals), the sparsity assumption can be described by the following sparse approximation problem:

    γ̂ = argmin_γ ‖γ‖₀  subject to  ‖x − Dγ‖₂² ≤ ε²    (1.1)

In this formulation, γ is the sparse representation of x, ε is the error tolerance, and ‖·‖₀ is the ℓ₀ pseudo-norm which counts the non-zero entries.
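The reason Batch-OMP pays off when coding many signals over one dictionary is that the Gram matrix G = DᵀD and the initial correlations Dᵀx can be computed once, after which atom selection needs no further multiplications by D. The sketch below shows that flavour of OMP on a noiseless synthetic problem (dimensions and coefficients are assumed; the full Batch-OMP additionally uses a progressive Cholesky factorization, omitted here).

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 64, 200
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)
gamma_true = np.zeros(m); gamma_true[[11, 95, 150]] = [1.5, -2.0, 1.0]
x = D @ gamma_true                       # exactly sparse signal, no noise

G = D.T @ D                              # pre-computed once for the whole batch
alpha0 = D.T @ x                         # initial correlations
S = []
for _ in range(5):
    if S:
        c, *_ = np.linalg.lstsq(D[:, S], x, rcond=None)
        alpha = alpha0 - G[:, S] @ c     # correlations updated in the Gram domain
    else:
        alpha = alpha0.copy()
    alpha[S] = 0.0                       # never re-select an atom
    S.append(int(np.argmax(np.abs(alpha))))

c, *_ = np.linalg.lstsq(D[:, S], x, rcond=None)
print(np.linalg.norm(x - D[:, S] @ c))   # residual vanishes once the true atoms are found
```

For a batch of signals, `G` is shared across all of them, so the per-signal work inside the loop involves only m-dimensional vectors, matching the report's motivation.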
  • Accelerated Split Bregman Method for Image Compressive Sensing Recovery Under Sparse Representation
    KSII Transactions on Internet and Information Systems, Vol. 10, No. 6, Jun. 2016, p. 2748. Copyright © 2016 KSII. Accelerated Split Bregman Method for Image Compressive Sensing Recovery under Sparse Representation. Bin Gao (1), Peng Lan (2), Xiaoming Chen (1), Li Zhang (3) and Fenggang Sun (2). (1) College of Communications Engineering, PLA University of Science and Technology, Nanjing, 210007, China [e-mail: [email protected], [email protected]]. (2) College of Information Science and Engineering, Shandong Agricultural University, Tai'an, 271018, China [e-mail: {lanpeng, sunfg}@sdau.edu.cn]. (3) College of Optoelectronic Engineering, Nanjing University of Posts and Telecommunications, Nanjing, 210023, China [e-mail: [email protected]]. Corresponding author: Bin Gao. Received December 14, 2015; revised March 13, 2016; accepted May 5, 2016; published June 30, 2016.
    Abstract: Compared with traditional patch-based sparse representation, recent studies have concluded that group-based sparse representation (GSR) can simultaneously enforce the intrinsic local sparsity and nonlocal self-similarity of images within a unified framework. This article investigates an accelerated split Bregman method (SBM) that is based on GSR, which exploits image compressive sensing (CS). The computational efficiency of accelerated SBM for the measurement matrix of a partial Fourier matrix can be further improved by the introduction of a fast Fourier transform (FFT) to derive the enhanced algorithm. In addition, we provide convergence analysis for the proposed method. Experimental results demonstrate that accelerated SBM is potentially faster than some existing image CS reconstruction methods.
    Keywords: compressive sensing, sparse representation, split Bregman method, accelerated split Bregman method, image restoration. DOI: http://dx.doi.org/10.3837/tiis.2016.06.016. ISSN: 1976-7277.
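The split Bregman method underlying this paper alternates a quadratic subproblem, a soft-thresholding subproblem, and a Bregman (dual) update. A minimal version for the simpler ℓ1-regularized CS recovery problem is sketched below, with a plain ℓ1 penalty standing in for the paper's group-based sparse representation; mu, lam, and the iteration count are assumed values.

```python
import numpy as np

def shrink(v, t):
    # soft threshold: proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(10)
m, n = 50, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)     # CS measurement matrix
u_true = np.zeros(n); u_true[[8, 33, 71]] = [1.0, -1.0, 2.0]
b = A @ u_true

# Split Bregman for  min ||d||_1 + (mu/2)||A u - b||^2  with the splitting d = u
mu, lam = 100.0, 10.0
H = np.linalg.inv(mu * A.T @ A + lam * np.eye(n))  # u-subproblem factored once
d = np.zeros(n); bk = np.zeros(n)
for _ in range(3000):
    u = H @ (mu * A.T @ b + lam * (d - bk))        # quadratic subproblem
    d = shrink(u + bk, 1.0 / lam)                  # l1 subproblem: soft threshold
    bk += u - d                                    # Bregman (dual) update

print(np.flatnonzero(np.abs(d) > 0.5))             # support of the reconstruction
```

The paper's acceleration applies when A is a partial Fourier matrix, in which case the inverse computed once above can be applied via FFTs instead of a dense factorization.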
  • Regularized Dictionary Learning for Sparse Approximation
    16th European Signal Processing Conference (EUSIPCO 2008), Lausanne, Switzerland, August 25-29, 2008, copyright by EURASIP. REGULARIZED DICTIONARY LEARNING FOR SPARSE APPROXIMATION. M. Yaghoobi, T. Blumensath, M. Davies, Institute for Digital Communications, Joint Research Institute for Signal and Image Processing, University of Edinburgh, UK.
    Abstract: Sparse signal models approximate signals using a small number of elements from a large set of vectors, called a dictionary. The success of such methods relies on the dictionary fitting the signal structure. Therefore, the dictionary has to be designed to fit the signal class of interest. This paper uses a general formulation that allows the dictionary to be learned from the data with some a priori information about the dictionary. In this formulation a universal cost function is proposed and practical algorithms are presented to minimize this cost under different constraints on the dictionary. The proposed methods are compared with previous approaches using synthetic and real data.
    One step of such algorithms finds the sparse coefficients while keeping the dictionary fixed. This is followed by a second step in which the sparse coefficients are kept fixed and the dictionary is optimized. This algorithm runs for a specific number of alternating optimizations or until a specific approximation error is reached. The proposed method is based on such an alternating optimization (or block-relaxed optimization) method, with some advantages over the current methods in the condition and speed of convergence. If the set of training samples is {y(i) : 1 ≤ i ≤ L}, where L is the number of training vectors, then sparse approximations are often found (for all i : 1 ≤ i ≤ L) by,
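The alternating (block-relaxed) optimization described above can be sketched with the simplest possible choices: a 1-sparse coding step (best single atom per training signal) and a least-squares dictionary update in the style of the method of optimal directions (MOD). The data model, dimensions, and iteration count are assumed values; the paper's contribution is adding regularization and constraints to the dictionary-update step.

```python
import numpy as np

rng = np.random.default_rng(11)
n, K, L = 20, 5, 300
D_true = rng.standard_normal((n, K))
D_true /= np.linalg.norm(D_true, axis=0)
idx = rng.integers(0, K, L)
# Training signals: one atom each, random positive gain, small noise (1-sparse model)
Y = D_true[:, idx] * rng.uniform(1.0, 2.0, L) + 0.01 * rng.standard_normal((n, L))

D = rng.standard_normal((n, K)); D /= np.linalg.norm(D, axis=0)
for _ in range(30):
    # Step 1 - sparse coding (dictionary fixed): best single atom per signal
    corr = D.T @ Y
    a = np.argmax(np.abs(corr), axis=0)
    C = np.zeros((K, L))
    C[a, np.arange(L)] = corr[a, np.arange(L)]
    # Step 2 - dictionary update (coefficients fixed): least squares, MOD-style
    D = Y @ np.linalg.pinv(C)
    norms = np.maximum(np.linalg.norm(D, axis=0), 1e-12)
    D /= norms                    # renormalize atoms...
    C *= norms[:, None]           # ...and rescale coefficients so D @ C is unchanged

err = np.linalg.norm(Y - D @ C) / np.linalg.norm(Y)
print(round(float(err), 3))       # typically near the noise floor
```

Both steps are exact minimizations of ‖Y − DC‖_F over their own block, so the fit never worsens across iterations, which is the convergence behavior the alternating formulation relies on.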