Sparse Approximation of PDEs Based on Compressed Sensing


Simone Brugiapaglia
Department of Mathematics, Simon Fraser University
Retreat for Young Researchers in Stochastics, September 24, 2016

Introduction

We address the following question: can we employ Compressed Sensing to solve a PDE?

In particular, we consider the weak formulation of a PDE,

    find u ∈ U :  a(u, v) = F(v),  ∀v ∈ V,

focusing on the Petrov–Galerkin (PG) discretization method [Aziz and Babuška, 1972].

Motivation:
- reduce the computational cost associated with a classical PG discretization;
- situations with a limited budget of evaluations of F(·);
- better theoretical understanding of the PG method.

Case study: the advection–diffusion–reaction (ADR) equation, with U = V = H¹₀(Ω), Ω = [0, 1]^d, and

    a(u, v) = (η∇u, ∇v) + (b·∇u, v) + (ρu, v),   F(v) = (f, v).

Outline:
1. Compressed Sensing
2. CORSING (COmpRessed SolvING)
3. A theoretical study of CORSING

Compressed Sensing (CS)

[D. Donoho, 2006; E. Candès, J. Romberg, and T. Tao, 2006]

Consider a signal s ∈ C^N, sparse with respect to a basis Ψ ∈ C^(N×N):

    s = Ψu  and  ‖u‖₀ =: s ≪ N,

where ‖u‖₀ := #{i : uᵢ ≠ 0}.

[Figure: a sparse vector u, plotting the components uᵢ against the component index i.]

The signal can be acquired by means of m ≪ N linear and non-adaptive measurements

    ⟨s, φᵢ⟩ =: fᵢ,  for i = 1, …, m.

If we consider the matrix Φ = [φᵢ] ∈ C^(N×m), we have

    Au = f,

where A = Φᴴ Ψ ∈ C^(m×N) and f ∈ C^m.

Sensing phase

The picture to have in mind: the measurement vector f equals the sensing matrix Φᴴ times the sparsity basis Ψ times the unknown sparse signal u. Since m ≪ N, the system Au = f is highly underdetermined. How can we recover the right u among its infinitely many solutions?

Recovery: finding a needle in a haystack

Thanks to the sparsity hypothesis, we can resort to sparse recovery techniques. We aim at approximating the solution to

    (P0)  min_{u ∈ C^N} ‖u‖₀  s.t.  Au = f.

Unfortunately, in general (P0) is an NP-hard problem... but there are computationally tractable strategies to approximate it!
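The sensing phase above can be sketched in a few lines of numpy. This is a minimal illustration, assuming for concreteness the identity sparsity basis Ψ = I and a Gaussian sensing matrix; both are assumptions for the sketch, since the slides keep Φ and Ψ generic.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, s = 100, 30, 5          # ambient dimension, measurements, sparsity

# An s-sparse coefficient vector u (s nonzero entries out of N).
u = np.zeros(N)
support = rng.choice(N, size=s, replace=False)
u[support] = rng.standard_normal(s)

# Sensing matrix A = Phi^H Psi; here Psi = I and Phi is Gaussian, so A is
# an m x N Gaussian matrix (a common CS model, not the slides' choice).
A = rng.standard_normal((m, N)) / np.sqrt(m)

f = A @ u                     # m non-adaptive linear measurements

# A has full row rank m < N: the system Au = f is highly underdetermined,
# and sparsity is what singles out the right solution.
print(A.shape, np.linalg.matrix_rank(A))
```

Any of the infinitely many solutions of Au = f reproduces the measurements; only the sparsity prior distinguishes the true u among them.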
In particular, we employ the greedy algorithm Orthogonal Matching Pursuit (OMP) to approximate

    (P0^ε)  min_{u ∈ C^N} ‖u‖₀  s.t.  ‖Au − f‖₂ ≤ ε,   or   (P0^s)  min_{u ∈ C^N} ‖Au − f‖₂  s.t.  ‖u‖₀ ≤ s.

Another valuable option is convex relaxation (not discussed here):

    (P1)  min_{u ∈ C^N} ‖u‖₁  s.t.  Au = f.

Orthogonal Matching Pursuit (OMP)

Input:
- a matrix A ∈ C^(m×N) with ℓ²-normalized columns;
- a vector f ∈ C^m;
- a tolerance on the residual ε > 0 (or else, a sparsity level s ∈ [N]).

Output: an approximate solution u to (P0^ε) (or else, to (P0^s)).

Procedure:
1: S ← ∅  (initialization)
2: u ← 0
3: while ‖Au − f‖₂ > ε (or else, ‖u‖₀ < s) do
4:     j ← argmax_{j ∈ [N]} |[Aᴴ(Au − f)]ⱼ|  (select new index)
5:     S ← S ∪ {j}  (enlarge support)
6:     u ← argmin_{z ∈ C^N} ‖Az − f‖₂  s.t.  supp(z) ⊆ S  (minimize residual)
7: end while
8: return u

The computational cost for the (P0^s) formulation is in general O(smN).

Recovery results based on the RIP

Many important recovery results in CS are based on the Restricted Isometry Property (RIP).

Definition (RIP). A matrix A ∈ C^(m×N) satisfies the RIP(s, δ) iff

    (1 − δ)‖u‖₂² ≤ ‖Au‖₂² ≤ (1 + δ)‖u‖₂²,  ∀u ∈ Σₛ := {v ∈ C^N : ‖v‖₀ ≤ s}.

Among many others, the RIP implies the following recovery result for OMP.

Theorem (RIP ⇒ OMP recovery) [T. Zhang, 2011; A. Cohen, W. Dahmen, R. DeVore, 2015].
There exist K ∈ ℕ, C > 0 and δ ∈ (0, 1) such that for every s ∈ ℕ the following holds: if A satisfies the RIP((K + 1)s, δ), then, for any f ∈ C^m, the OMP algorithm computes in Ks iterations a solution u that fulfills

    ‖Au − f‖₂ ≤ C inf_{w ∈ Σₛ} ‖Aw − f‖₂.

The reference problem

Given two Hilbert spaces U and V, consider the following problem:

    find u ∈ U :  a(u, v) = F(v),  ∀v ∈ V,   (1)

where a : U × V → ℝ is a bilinear form and F ∈ V*. We will assume a(·, ·) to fulfill

    ∃α > 0 :  inf_{u ∈ U} sup_{v ∈ V} a(u, v) / (‖u‖_U ‖v‖_V) ≥ α,   (2)
    ∃β > 0 :  sup_{u ∈ U} sup_{v ∈ V} |a(u, v)| / (‖u‖_U ‖v‖_V) ≤ β,   (3)
    sup_{u ∈ U} a(u, v) > 0,  ∀v ∈ V \ {0}.   (4)

Conditions (2), (3) and (4) imply that problem (1) admits a unique solution [J. Nečas, 1962]. We will focus on advection–diffusion–reaction (ADR) equations.

The Petrov–Galerkin method

Given Ω ⊆ ℝ^d, consider the weak formulation of an ADR equation:

    find u ∈ H¹₀(Ω) :  (η∇u, ∇v) + (b·∇u, v) + (ρu, v) = (f, v),  ∀v ∈ H¹₀(Ω),   (ADR)

where the left-hand side is a(u, v) and the right-hand side is F(v). Choose U^N ⊆ H¹₀(Ω) and V^M ⊆ H¹₀(Ω) with

    U^N = span{ψ₁, …, ψ_N}  (trials),   V^M = span{φ₁, …, φ_M}  (tests).

Then we can discretize (ADR) as

    A û = f,   A_{ij} = a(ψⱼ, φᵢ),   fᵢ = F(φᵢ),

with A ∈ C^(M×N) and f ∈ C^M. A common choice is M = N.

Examples of Petrov–Galerkin methods: finite elements, spectral methods, collocation methods, etc.

The main analogy

A fundamental analogy guided us through the development of our method:

    Petrov–Galerkin method         Sampling
    solution of a PDE        ⟺    signal
    tests (bilinear form)    ⟺    measurements (inner product)

Reference: "Compressed solving: a numerical approximation technique for elliptic PDEs based on compressed sensing", S. B., S. Micheletti, S. Perotto, Comput. Math. Appl. 2015; 70(6):1306–1335.

Related literature

Ancestors: PDE solvers based on ℓ¹-minimization
- 1988 [J. Lavery, 1988; J. Lavery, 1989]: inviscid Burgers' equation, conservation laws
- 2004 [J.-L. Guermond, 2004; J.-L. Guermond and B. Popov, 2009]: Hamilton–Jacobi, transport equation

CS techniques for PDEs
- 2010 [S. Jokar, V. Mehrmann, M. Pfetsch, and H. Yserentant, 2010]: recursive mesh refinement based on CS (Poisson equation)
- 2011 onwards [A. Doostan and H. Owhadi, 2011; K. Sargsyan et al., 2014; H. Rauhut and C. Schwab, 2014; J.-L. Bouchot et al., 2015]: application of CS to parametric PDEs and Uncertainty Quantification
- 2015 [S. B., S. Micheletti, S. Perotto, 2015; S. B., F. Nobile, S. Micheletti, S. Perotto, 2016]: CORSING for ADR problems

CORSING (COmpRessed SolvING)

Assembly phase:
1. choose two sets of N independent elements of U and V, the trials {ψ₁, …, ψ_N} and the tests {φ₁, …, φ_N};
2. choose m ≪ N tests {φ_{τ₁}, …, φ_{τ_m}};
3. build A ∈ C^(m×N) and f ∈ C^m as [A]_{ij} := a(ψⱼ, φ_{τᵢ}) and [f]ᵢ := F(φ_{τᵢ}).

Recovery phase: find a compressed solution u_m^N to Au = f via sparse recovery.

Classical case: square matrices

When dealing with Petrov–Galerkin discretizations, one usually ends up with a big square matrix: every test φᵢ, i = 1, …, N, contributes one row of the N × N system, with entries a(ψⱼ, φᵢ) and right-hand side F(φᵢ).

[Figure: the full N × N linear system A û = f, with one row per test function φ₁, …, φ₇.]

"Compressing" the discretization

We would like to use only m random tests instead of N, with m ≪ N, ...

Sparse recovery

... in order to obtain a reduced discretization: only the rows corresponding to the selected tests are assembled.

[Figure: the reduced m × N system obtained by keeping only the rows of the selected tests, here φ₂ and φ₅.]

The solution is then computed using sparse recovery techniques.

How to choose {ψⱼ} and {φᵢ}?

One heuristic criterion commonly used in CS is to choose one basis sparse in space, and the other in frequency:
- hierarchical hat functions H_{ℓ,k} [O. Zienkiewicz et al., 1982];
- sine functions S_r.

[Figure: the hierarchical hat functions H_{0,0}, H_{1,0}, H_{1,1}, H_{2,0}, …, H_{2,3} and the sine functions S₁, …, S₅ on [0, 1].]

We name the corresponding strategies CORSING HS and SH.
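The OMP pseudocode above translates almost line by line into numpy. The sketch below uses the sparsity-based stopping rule of (P0^s); the ε-based rule of (P0^ε) would simply replace the for-loop with a while-loop on the residual norm. The Gaussian test matrix in the usage example is an illustrative assumption, not the slides' setting.

```python
import numpy as np

def omp(A, f, s):
    """Orthogonal Matching Pursuit for the (P0^s) formulation: greedily
    build an s-sparse u minimizing ||Au - f||_2.
    Assumes the columns of A are l2-normalized, as on the slides."""
    N = A.shape[1]
    u = np.zeros(N)
    S = []                                          # current support
    for _ in range(s):
        r = A @ u - f                               # residual
        j = int(np.argmax(np.abs(A.conj().T @ r)))  # select new index
        if j not in S:
            S.append(j)                             # enlarge support
        # Minimize the residual over all z with supp(z) contained in S.
        z, *_ = np.linalg.lstsq(A[:, S], f, rcond=None)
        u = np.zeros(N)
        u[S] = z
    return u

# Usage: recover a 4-sparse vector from m = 60 Gaussian measurements.
rng = np.random.default_rng(1)
N, m, s = 200, 60, 4
A = rng.standard_normal((m, N))
A /= np.linalg.norm(A, axis=0)                      # l2-normalize the columns
u_true = np.zeros(N)
u_true[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
f = A @ u_true
u_rec = omp(A, f, s)
```

Each iteration costs one matrix-vector product with Aᴴ plus a small least-squares solve, which is where the overall O(smN) cost quoted on the slides comes from.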
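As a concrete end-to-end illustration of the CORSING assembly and recovery phases, here is a toy experiment for the pure-diffusion problem −u″ = f on (0, 1) with homogeneous Dirichlet conditions, using sine trials tested against (non-hierarchical) hat functions, one of the two pairings named above. The sizes N = 63 and m = 20, the two-term exact solution, and the inline two-iteration OMP loop are all illustrative assumptions, not choices from the paper; OMP is re-implemented inline so the sketch is self-contained.

```python
import numpy as np

# Uniform grid of N interior nodes for the hat-function tests on [0, 1].
N = 63                                # sine trials and candidate hat tests
h = 1.0 / (N + 1)
x = np.linspace(0.0, 1.0, N + 2)      # nodes x_0 = 0, ..., x_{N+1} = 1

# Full Petrov-Galerkin matrix for a(u, v) = (u', v') (eta = 1, b = 0, rho = 0):
# a(sin(j pi .), H_i) = (2 sin(j pi x_i) - sin(j pi x_{i-1})
#                        - sin(j pi x_{i+1})) / h   (exact integral).
j = np.arange(1, N + 1)
S = np.sin(np.pi * np.outer(x, j))            # S[k, j-1] = sin(j pi x_k)
A_full = (2 * S[1:-1] - S[:-2] - S[2:]) / h   # shape (N, N)

# A 2-sparse exact solution u(x) = 3 sin(3 pi x) - 2 sin(11 pi x);
# then F(H_i) = a(u, H_i) is an exact linear combination of matrix columns.
c_true = np.zeros(N)
c_true[[2, 10]] = [3.0, -2.0]
f_full = A_full @ c_true

# CORSING assembly phase: keep only m << N randomly chosen tests.
rng = np.random.default_rng(0)
m = 20
tau = rng.choice(N, size=m, replace=False)
A, f = A_full[tau], f_full[tau]

# Recovery phase: two OMP iterations on the column-normalized reduced system.
norms = np.linalg.norm(A, axis=0)
An = A / norms
u, supp = np.zeros(N), []
for _ in range(2):
    jstar = int(np.argmax(np.abs(An.T @ (An @ u - f))))
    if jstar not in supp:
        supp.append(jstar)
    z, *_ = np.linalg.lstsq(An[:, supp], f, rcond=None)
    u = np.zeros(N)
    u[supp] = z
c_rec = u / norms                             # undo the column normalization
```

A pleasant feature of this pairing: since sin(jπx_{i−1}) + sin(jπx_{i+1}) = 2 sin(jπx_i) cos(jπh), each column of A_full is a scalar multiple of (sin(jπx_i))_i, so after column normalization the reduced system is a randomly row-subsampled discrete sine transform, a well-conditioned compressed sensing matrix in the space/frequency spirit of the heuristic above.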