Sparsity, Randomness and Compressed Sensing

Petros Boufounos, Mitsubishi Electric Research Labs ([email protected])

Sparsity

Why Sparsity

• Natural data and signals exhibit structure
• Sparsity often captures that structure
• Very general signal model
• Computationally tractable
• Wide range of applications in signal acquisition, processing, and transmission

Signal Representations

Signal example: Images

• A 2-D function
• Idealized view: an element of some function space defined over the image plane
• In practice: a matrix of pixel values (each entry a pixel average)

Signal Models

• Classical model: the signal lies in a linear vector space (e.g., bandlimited functions)
• Sparse model: signals of interest are often sparse or compressible

Signal Transforms

• Image → wavelet transform
• Bat sonar → Gabor/chirp transform (STFT)
• i.e., very few large coefficients, many close to zero

Sparse Signal Models

• Sparse signals have few non-zero coefficients (e.g., 1-sparse, 2-sparse)
• Compressible signals have few significant coefficients; the sorted coefficients decay as a power law (an lp ball with p < 1)

Sparse Approximation: Computational Harmonic Analysis

• Representation: the signal is expressed as coefficients in a basis or frame
• Analysis: study the signal through the structure of its coefficients; the basis should extract the features of interest
• Approximation: use just a few terms, exploiting the sparsity of the representation

Wavelet Transform Sparsity

• Many small (near-zero) coefficients, few big ones
• Sorted by magnitude, the coefficients decay quickly: sparseness enables approximation

Linear Approximation

• K-term approximation: use the "first" K coefficients in a fixed index order

Nonlinear Approximation

• K-term approximation: use the K largest coefficients, selected independently for each signal
• Greedy selection / thresholding

Approximation Rates

• Optimize the asymptotic decay rate of the approximation error as K grows
• Nonlinear approximation works better than linear approximation

Compression is Approximation

• Lossy compression of an image creates an approximation: coefficients in a basis or frame are quantized to a total bit budget

Sparse Approximation ≠ Compression

• Sparse approximation chooses the largest coefficients (thresholding) but does not quantize them or worry about their locations

Location, Location, Location

• Nonlinear approximation selects the K largest coefficients to minimize the error (easy: threshold)
• A compression algorithm must encode both the coefficient values and their locations (harder)

Exposing Sparsity: Spikes and Sinusoids Example

• Example signal model: a sinusoidal signal with a few spikes
• In a DCT basis B, the representation f = Ba is not sparse because of the spikes

Spikes and Sinusoids Dictionary

• Concatenate the DCT basis and the impulse basis into a dictionary D and represent f = Da
• The representation is now sparse, but we lost uniqueness!

Overcomplete Dictionaries

• Strategy: improve sparse approximation by constructing a large dictionary D with f = Da
• How do we design a dictionary?

Dictionary Design

• Candidate elements: wavelets; DCT, DFT; edgelets, curvelets, ...; impulse basis; oversampled frames; ...
• Can we just throw everything we know into the bucket?
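As a concrete illustration of the spikes-and-sinusoids dictionary above, here is a minimal sketch (assuming NumPy; the signal, dimensions, and helper name are made up for illustration) that concatenates an orthonormal DCT basis with the impulse basis and shows that the minimum-l2 coefficients are dense even though a 3-term representation exists.

import numpy as np

def dct_basis(n):
    # Orthonormal DCT-II basis, returned as the columns of an n x n matrix
    i = np.arange(n)[:, None]
    k = np.arange(n)[None, :]
    B = np.cos(np.pi * (i + 0.5) * k / n)
    B[:, 0] *= np.sqrt(1.0 / n)
    B[:, 1:] *= np.sqrt(2.0 / n)
    return B

n = 64
B = dct_basis(n)
D = np.hstack([B, np.eye(n)])   # overcomplete dictionary: DCT atoms plus impulses

# A sinusoidal signal (two DCT atoms) plus one spike: 3-sparse in D
f = 2.0 * B[:, 5] - 1.0 * B[:, 12]
f[40] += 3.0

# The minimum-l2 coefficients from the pseudoinverse spread over many atoms,
# even though f has an exact 3-atom representation in this dictionary.
a_l2 = np.linalg.pinv(D) @ f
print("non-negligible coefficients:", np.sum(np.abs(a_l2) > 1e-3))

A sparse solver (thresholding, matching pursuit, or basis pursuit, discussed next) applied to the same f and D instead picks out the two DCT atoms and the impulse.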
Dictionary Design Considerations

• Dictionary size: computation and storage increase with size
• Fast transforms: FFT, DCT, FWT, etc. dramatically decrease computation and storage
• Coherence: similarity between elements makes the solution harder

Dictionary Coherence

• Of two candidate dictionaries, the one with many similar elements is very coherent, and that is bad
• Coherence (similarity) between two elements: 〈di, dj〉
• Dictionary coherence: μ = max over i ≠ j of |〈di, dj〉|

Incoherent Bases

• "Mix" the signal components well:
  – Impulses and the Fourier basis
  – Anything and a random Gaussian basis
  – Anything and a random 0-1 basis

Computing Sparse Representations: Thresholding

• Compute the full set of coefficients a = D†f, then zero out the small ones
• Computationally efficient
• Good for small and very incoherent dictionaries

Matching Pursuit

• Measure the signal against the dictionary: ρ = Dᵀf, i.e., ρk = 〈dk, f〉
• Select the largest correlation ρk
• Add it to the representation: ak ← ak + ρk
• Compute the residual: f ← f − ρk dk
• Iterate using the residual (a short code sketch of this loop appears at the end of this part)

Greedy Pursuits Family

• Several variations of MP: OMP, StOMP, ROMP, CoSaMP, Tree MP, ... (you can create an AndrewMP if you work on it...)
• Some have provable guarantees
• Some improve the dictionary search
• Some improve the coefficient selection

CoSaMP (Compressive Sampling MP)

• Measure the residual against the dictionary (the residual starts as the signal itself): ρ = Dᵀr, with ρk = 〈dk, r〉
• Select the locations of the 2K largest correlations: Ω = supp(ρ|2K)
• Add them to the support set: T = Ω ∪ supp(a)
• Invert (least squares) over the support: b = D_T†f
• Truncate to the K largest entries and compute the residual: a = b|K, r ← f − Da
• Iterate using the residual

Optimization (Basis Pursuit)

• Sparse approximation: minimize the number of non-zeros in the representation subject to the representation being close to the signal:

  min ‖a‖0 s.t. f ≈ Da

  where ‖a‖0 counts the non-zeros (the sparsity measure) and f ≈ Da enforces data fidelity (approximation quality)
• Combinatorial complexity: a very hard problem!
• Convex relaxation:

  min ‖a‖1 s.t. f ≈ Da

• Polynomial complexity; solved using linear programming

Why l1 Relaxation Works

• min ‖a‖1 s.t. f ≈ Da
• Geometrically, the l1 "ball" grown until it first touches the constraint set f = Da tends to meet it at a sparse solution

Basis Pursuits

• Have provable guarantees: they find the sparsest solution for incoherent dictionaries
• Several variants in formulation: BPDN, LASSO, Dantzig selector, ...
• Variations on the fidelity term and the relaxation choice
• Several fast algorithms: FPC, GPSR, SPGL1, ...

Compressed Sensing: Sensing, Sampling and Data Processing

Data Acquisition

• Usual acquisition methods sample signals uniformly
  – Time: A/D converters with microphones, geophones, hydrophones
  – Space: CCD cameras, sensor arrays
• Foundation: Nyquist/Shannon sampling theory
  – Sample at twice the signal bandwidth
  – Generally a projection onto a complete basis that spans the signal space
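A minimal sketch of the matching pursuit loop described earlier, assuming NumPy and unit-norm dictionary atoms; the function name, iteration count, and stopping rule are illustrative and not part of the original slides.

import numpy as np

def matching_pursuit(f, D, n_iter=50, tol=1e-6):
    # Greedy MP: correlate the residual with the dictionary, pick the largest
    # correlation, add it to the representation, update the residual, repeat.
    a = np.zeros(D.shape[1])
    r = f.astype(float).copy()            # residual starts as the signal
    for _ in range(n_iter):
        rho = D.T @ r                     # correlations rho_k = <d_k, r>
        k = int(np.argmax(np.abs(rho)))   # index of the largest correlation
        a[k] += rho[k]                    # add to the representation
        r = r - rho[k] * D[:, k]          # remove that component from the residual
        if np.linalg.norm(r) < tol:
            break
    return a

Applied to the spikes-and-sinusoids dictionary sketched earlier, a few iterations typically pick out the impulse and the two DCT components, with a small residual that further iterations clean up.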
Data Processing and Transmission

• Typical data processing steps:
  – Sample densely: signal x with N coefficients
  – Transform to an informative domain (Fourier, wavelet): K ≪ N significant coefficients
  – Process/compress/transmit: small coefficients are set to zero (sparsification)

Sparsity Model

• Signals can usually be compressed in some basis
• Sparsity is a good prior for picking one signal out of a lot of candidates

Compressive Sensing Principles

• If a signal is sparse, do not waste effort sampling the empty space
• Instead, use fewer samples and allow ambiguity
• Use the sparsity model to reconstruct and uniquely resolve the ambiguity

Measuring Sparse Signals: Compressive Measurements and Ambiguity

• Measurement (projection): y = Φx, where Φ has rank M ≪ N
• N = signal dimensionality, M = number of measurements (dimensionality of y), K = signal sparsity
• N ≫ M ≳ K

One Simple Question

Geometry of Sparse Signal Sets: Embedding in R^M

• Illustrative example: a 1-sparse signal with N = 3, K = 1, M = 2K = 2
• A bad projection collapses distinct 1-sparse signals onto the same measurements (e.g., it drops a coordinate or maps two coordinates to the same measurement)
• A good projection keeps all 1-sparse signals separated; a better one also preserves the distances between them

Restricted Isometry Property: RIP as a "Stable" Embedding

Verifying RIP

Universality Property: Democracy

• Measurements are democratic [Davenport, Laska, Boufounos, Baraniuk]
  – They are all equally important
  – We can lose some arbitrarily (i.e., an adversary can choose which ones)
• The remaining measurement matrix Φ̃ still satisfies the RIP (as long as we don't drop too many measurements)

Requirements for Reconstruction

• Let x1, x2 be K-sparse signals (i.e., x1 − x2 is 2K-sparse)
• The mapping y = Φx must be invertible for K-sparse signals: Φ(x1 − x2) ≠ 0 if x1 ≠ x2
• The mapping must be robust for K-sparse signals: ‖Φ(x1 − x2)‖2 ≈ ‖x1 − x2‖2
  – Restricted Isometry Property (RIP): Φ preserves distances when projecting K-sparse signals
• This guarantees there exists a unique K-sparse signal that explains the measurements, and that reconstruction is robust to noise

Reconstruction Ambiguity

• The solution should be consistent with the measurements: find x̂ such that y = Φx̂ or y ≈ Φx̂
• The projection implies that an infinite number of solutions are consistent!
• Classical approach: use the pseudoinverse (minimize the l2 norm)
• Compressive sensing approach: pick the sparsest consistent solution
• RIP guarantee: the sparsest solution is unique and reconstructs the signal
• Reconstruction becomes a sparse approximation problem!

Putting Everything Together

• Signal structure (sparsity), a measurement y = Φx that is a stable embedding (random projections), and reconstruction of x from y using sparse approximation, i.e., non-linear reconstruction (basis pursuit, matching pursuit, CoSaMP, etc.); a numerical sketch of this pipeline appears below
• Signal model: provides prior information; allows undersampling
• Randomness: provides robustness/stability; makes proofs easier
• Non-linear reconstruction: incorporates the prior information through computation

Beyond: Extensions, Connections, Generalizations

Sparsity Models: Block Sparsity

• The sparse signal x has a few nonzero blocks of length L; the measurements are y = Φx
• Mixed l1/l2 norm: the sum of the l2 norms of the blocks, Σi ‖x_Bi‖2
• Basis pursuit becomes: min Σi ‖x_Bi‖2 s.t. y ≈ Φx
• Blocks are not allowed to overlap
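To make the "putting everything together" pipeline concrete, here is a minimal sketch (assuming NumPy; the dimensions and the choice of OMP as the sparse recovery routine are illustrative): a K-sparse signal is measured with a random Gaussian Φ of rank M ≪ N and then recovered from the M measurements alone.

import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8                     # N >> M >~ K, as in the slides

# K-sparse signal
x = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
x[support] = rng.standard_normal(K)

# Random Gaussian measurement matrix and compressive measurements
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Non-linear reconstruction with a few OMP iterations
T = []                                   # estimated support
r = y.copy()                             # measurement residual
for _ in range(K):
    k = int(np.argmax(np.abs(Phi.T @ r)))                 # column most correlated with residual
    if k not in T:
        T.append(k)
    coef, *_ = np.linalg.lstsq(Phi[:, T], y, rcond=None)  # least squares on the support
    r = y - Phi[:, T] @ coef

x_hat = np.zeros(N)
x_hat[T] = coef
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

With these dimensions the recovery is typically exact to numerical precision, even though only M = N/4 measurements of the length-N signal are taken.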
Joint Sparsity

• L sparse signals in R^N with a common support, measured jointly: the N × L signal matrix x yields the M × L measurement matrix y = Φx
• Sparse components per signal, with the support shared across signals
• Mixed l1/l2 norm: the sum of the l2 norms of the rows, Σi ‖x(i,·)‖2
• Basis pursuit becomes: min Σi ‖x(i,·)‖2 s.t. y ≈ Φx

Randomized Embeddings

• Stable embeddings
• Johnson-Lindenstrauss lemma: a random projection into R^M approximately preserves the pairwise distances among any finite set of points
• Favorable JL distributions
• Connecting JL to RIP

More? The Tip of the Iceberg

• Today's lecture covers only part of the picture
• Compressive Sensing Repository: dsp.rice.edu/cs
• Blog on CS: nuit-blanche.blogspot.com/
• Yet to be discovered... start working on it!
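The Johnson-Lindenstrauss behaviour referred to above can be checked numerically. A minimal sketch (assuming NumPy; the point-cloud size and dimensions are made up): project an arbitrary set of points with a scaled random Gaussian matrix and compare pairwise distances before and after the projection.

import numpy as np

rng = np.random.default_rng(1)
N, M, n_points = 1000, 60, 50

X = rng.standard_normal((n_points, N))            # arbitrary point cloud in R^N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # scaled random Gaussian projection
Y = X @ Phi.T                                     # embedded points in R^M

# Pairwise distances before and after the projection
i, j = np.triu_indices(n_points, k=1)
d_orig = np.linalg.norm(X[i] - X[j], axis=1)
d_proj = np.linalg.norm(Y[i] - Y[j], axis=1)
ratio = d_proj / d_orig

print("distance ratios: min %.2f, max %.2f" % (ratio.min(), ratio.max()))

The ratios concentrate around 1, which is the stable-embedding behaviour that the RIP requires on the set of sparse signals.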
