
Matched Filtering from Limited Frequency Samples

Armin Eftekhari Justin Romberg Michael Wakin

Colorado School of Mines · Georgia Tech

Compressive Sensing

• x is K-sparse
• Collect linear measurements y = Φx
  - random measurement operator Φ
• Recover x from y by exploiting the assumption of sparsity (a minimal sketch follows below)

[Figure: sparse signal x with few nonzero entries, measurements y = Φx]
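To make this concrete, here is a minimal numpy sketch of the measure-then-recover pipeline, using a small orthogonal matching pursuit as one possible recovery step; the dimensions, the Gaussian Φ, and the choice of OMP are illustrative assumptions, not specifics from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5                  # ambient dimension, measurements, sparsity

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement operator
y = Phi @ x                                      # y = Phi x

# Basic orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then least-squares refit on that support.
support = []
residual = y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coeffs

x_hat = np.zeros(N)
x_hat[support] = coeffs
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```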

[Candès et al., Donoho, …]

Application 1: Medical Imaging

[Figure: space-domain image, Fourier coefficients, Fourier sampling pattern]

[Figure: backprojection reconstruction, 29.00 dB vs. min-TV reconstruction, 34.23 dB] [CR]

Application 2: Digital Photography

[Figure: single-pixel camera — random pattern on DMD array, single photon detector; 4096 pixels reconstructed from 1600 measurements (40%)]

[with R. Baraniuk + Rice CS Team]

Application 3: Analog-to-Digital Conversion

• Sampling analog signals at the information level

[Figure: Nyquist samples vs. compressive samples]

Application 4: Sensor Networks

• Joint sparsity
• Distributed CS: measure separately, reconstruct jointly (distributed source coding)
• Robust, scalable

Restricted Isometry Property (RIP)

• RIP requires: for all K-sparse x1 and x2,
$$(1-\delta)\,\|x_1 - x_2\|_2^2 \;\le\; \|\Phi x_1 - \Phi x_2\|_2^2 \;\le\; (1+\delta)\,\|x_1 - x_2\|_2^2$$

[Figure: union of K-planes]

• Stable embedding of the sparse signal family

Proving that Random Matrices Work

• Goal is to prove that for all (2K)-sparse x ∈ R^N,
$$(1-\delta)\,\|x\|_2^2 \;\le\; \|\Phi x\|_2^2 \;\le\; (1+\delta)\,\|x\|_2^2$$

• Recast as a bound on a random process:
$$\sup_{x \in \Sigma} \big|\, \|\Phi x\|_2^2 - \|x\|_2^2 \,\big| \;\le\; \delta,$$
where Σ is the set of all (2K)-sparse signals x with $\|x\|_2 = 1$.
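As a rough empirical companion to this statement (not the chaining proof itself), the sketch below draws random K-sparse unit vectors and checks how far ‖Φx‖₂² strays from 1 for a Gaussian Φ; sampling random x only probes the supremum from below, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K, trials = 512, 128, 10, 2000

Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # scaled so E||Phi x||^2 = ||x||^2

deviations = []
for _ in range(trials):
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    x /= np.linalg.norm(x)                       # unit-norm, K-sparse test vector
    deviations.append(abs(np.linalg.norm(Phi @ x) ** 2 - 1.0))

print("max deviation over random sparse x:", max(deviations))
```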

Bounding a Random Process

• Common techniques:
  - Dudley's inequality relates the expected supremum of a random process to the geometry of its index set
  - strong tail bounds control deviation from the average
• Works for a variety of random matrix types:
  - Gaussian and Fourier matrices [Rudelson and Vershynin]
  - circulant and Toeplitz matrices [Rauhut, Romberg, Tropp]
  - incoherent matrices [Candès and Plan]

Low-Complexity Inference

• In many problems of interest,
  information level ≪ sparsity level

• Example: unknown signal parameterizations

• Is it necessary to fully recover a signal in order to estimate some low-dimensional parameter?
  - Can we somehow exploit the lower information level?
  - Can we exploit the concentration of measure phenomenon?

Compressive Inference

• Low-complexity inference
  - detection/classification [Haupt and Nowak; Davenport, W., et al.]
  - estimation ("smashed filtering") [Davenport, W., et al.]
• Generic analysis based on stable manifold embeddings

• This talk:
  - focus on a simple technique for estimating unknown signal translations from random measurements
    • efficient alternative to conventional matched filter designs
  - special case: pure tone estimation from random samples
  - what's new:
    • analysis sharply focused on the estimation statistics
    • analog front end

Tone Estimation

Motivating Scenario

• Analog sinusoid with unknown frequency ω0 ∈ Ω

• Observe M random samples in time
• tm ~ Uniform([−½, ½])

$$y = \begin{bmatrix} e^{j\omega_0 t_1} \\ e^{j\omega_0 t_2} \\ \vdots \\ e^{j\omega_0 t_M} \end{bmatrix}$$

[Figure: random sample times t1, t2, …, tM]

Least-Squares Estimation

• Recall the measurement model
$$y = \begin{bmatrix} e^{j\omega_0 t_1} \\ e^{j\omega_0 t_2} \\ \vdots \\ e^{j\omega_0 t_M} \end{bmatrix}$$
• For every ω ∈ Ω, consider the test vector
$$\psi_\omega = \begin{bmatrix} e^{j\omega t_1} \\ e^{j\omega t_2} \\ \vdots \\ e^{j\omega t_M} \end{bmatrix}$$
• Compute test statistics $X(\omega) = \langle y, \psi_\omega \rangle$ and let
$$\hat{\omega}_0 = \arg\max_{\omega \in \Omega} |X(\omega)|$$

Example

$$X(\omega) = \left\langle \begin{bmatrix} e^{j\omega_0 t_1} \\ e^{j\omega_0 t_2} \\ \vdots \\ e^{j\omega_0 t_M} \end{bmatrix}, \begin{bmatrix} e^{j\omega t_1} \\ e^{j\omega t_2} \\ \vdots \\ e^{j\omega t_M} \end{bmatrix} \right\rangle$$
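A minimal numpy sketch of this estimator: generate the random samples, evaluate X(ω) on a discrete grid standing in for Ω, and take the argmax. The grid, seed, and ω0 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, omega0 = 10, 17.3
t = rng.uniform(-0.5, 0.5, M)                 # t_m ~ Uniform([-1/2, 1/2])
y = np.exp(1j * omega0 * t)                   # noiseless tone samples

omegas = np.linspace(-60, 60, 4001)           # discrete grid standing in for Omega
# X(omega) = sum_m y_m * conj(e^{j omega t_m}); one row per candidate omega
X = np.exp(-1j * np.outer(omegas, t)) @ y

omega_hat = omegas[np.argmax(np.abs(X))]
print("estimate:", omega_hat, " true:", omega0)
```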

[Plot: X(ω) for M = 10, ω ∈ [−60, 60]]

Analytical Framework


• $X(\omega) = \langle y, \psi_\omega \rangle$ is a random process indexed by ω.
• X(ω) is an unbiased estimate of the true function:
$$\mathbb{E}[X(\omega)] = M \operatorname{sinc}\!\left(\frac{\omega - \omega_0}{2}\right)$$
• At each frequency, the variance of the estimate decreases with M.
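A quick Monte Carlo check of the unbiasedness claim, averaging X(ω) over many independent draws of the sample times and comparing against M·sinc((ω−ω0)/2) with the unnormalized sinc; the grid and trial count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
M, omega0, trials = 10, 0.0, 5000
omegas = np.linspace(-60, 60, 601)

acc = np.zeros(len(omegas), dtype=complex)
for _ in range(trials):
    t = rng.uniform(-0.5, 0.5, M)             # fresh random times each trial
    acc += np.exp(-1j * np.outer(omegas, t)) @ np.exp(1j * omega0 * t)
empirical = (acc / trials).real               # E[X(omega)] is real here

d = (omegas - omega0) / 2
expected = M * np.sinc(d / np.pi)             # np.sinc(x) = sin(pi x)/(pi x)
print("max |empirical - expected|:", np.max(np.abs(empirical - expected)))
```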

[Plots: X(ω) overlaid with E[X(ω)] for M = 10, 20, 50, 1000]

Analytical Framework

• When will X(ω) peak at or near the correct ω0?
• Can we bound the maximum (supremum) of $|X(\omega) - \mathbb{E}[X(\omega)]|$ over the infinite set of frequencies ω ∈ Ω?

[Plot: X(ω) and E[X(ω)] for M = 10]

Chaining

• Consider the centered process $Y(\omega) = X(\omega) - \mathbb{E}[X(\omega)]$.

[Figure: chaining — successively finer approximations π0(ω), π1(ω), π2(ω), … to ω within Ω]

$$Y(\omega) = Y(\pi_0(\omega)) + \sum_{j \ge 0} \big( Y(\pi_{j+1}(\omega)) - Y(\pi_j(\omega)) \big)$$

• Taking suprema along the chain gives
$$\sup_{\omega \in \Omega} |Y(\omega)| \;\le\; \max_{p_0 \in \Omega_0} |Y(p_0)| \;+\; \sum_{j \ge 0} \max_{(p_j, q_j) \in L_j} |Y(q_j) - Y(p_j)|$$

Modifying the Random Process

• Recall the centered random process $Y(\omega) = X(\omega) - \mathbb{E}[X(\omega)]$
• Define an independent copy Y′(ω) with an independent set of random times {t′m}
• Define the symmetric random process $Z(\omega) = Y(\omega) - Y'(\omega)$
• Modulate with a Rademacher (±1) sequence {εm}:
$$Z'(\omega) = \sum_{m=1}^{M} \varepsilon_m \left( e^{j(\omega_0 - \omega) t_m} - e^{j(\omega_0 - \omega) t'_m} \right)$$
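A small sketch of this construction (illustrative only, not part of the proof): draw the independent copy of the times, form the termwise differences, and modulate with random signs.

```python
import numpy as np

rng = np.random.default_rng(4)
M, omega0 = 10, 17.3
omegas = np.linspace(-60, 60, 1201)            # grid standing in for Omega

t = rng.uniform(-0.5, 0.5, M)                  # original times {t_m}
tp = rng.uniform(-0.5, 0.5, M)                 # independent copy {t'_m}
eps = rng.choice([-1.0, 1.0], M)               # Rademacher signs

# Z'(omega) = sum_m eps_m (e^{j(omega0-w)t_m} - e^{j(omega0-w)t'_m});
# the deterministic means cancel in the difference of the two copies.
diff = (np.exp(1j * np.outer(omega0 - omegas, t))
        - np.exp(1j * np.outer(omega0 - omegas, tp)))   # shape (grid, M)
Zp = diff @ eps
print("sup of |Z'| over the grid:", np.max(np.abs(Zp)))
```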

Bounding the Random Process

• Conditioned on the times {tm} and {t′m}, Hoeffding's inequality bounds Z′(ω) and its increments.

• A chaining argument bounds the supremum of Z′(ω).

• A careful union bound combines all of this to give the result below.

Finishing Steps

• After removing the conditioning on the times {tm} and {t′m}, and relating Z′(ω) to Y(ω), we conclude that
$$\mathbb{E} \sup_{\omega \in \Omega} \big| X(\omega) - \mathbb{E}[X(\omega)] \big| \;=\; \mathbb{E} \sup_{\omega \in \Omega} |Y(\omega)| \;\le\; C \sqrt{M \log |\Omega|},$$
whereas the peak of E[X(ω)] scales with M.

• Slightly extending these arguments, we have
$$\sup_{\omega \in \Omega} \big| X(\omega) - \mathbb{E}[X(\omega)] \big| \;=\; \sup_{\omega \in \Omega} |Y(\omega)| \;\le\; C \sqrt{M \log(|\Omega|/\delta)}$$
with probability at least 1 − δ.

Estimation Accuracy

• From our bounds, we conclude that if
$$M \;\ge\; C \log(|\Omega|/\delta),$$
then with probability at least 1 − δ, the peak of |X(ω)| will occur within the correct main lobe.

• If our observation interval has length |T| and we take
$$M \;\ge\; C \log(|\Omega|\,|T|/\delta),$$
we are guaranteed a frequency resolution of
$$|\hat{\omega}_0 - \omega_0| \;\le\; \frac{2\pi}{|T|}$$
with probability at least 1 − δ.
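A simple experiment consistent with this guarantee: for times on [−½, ½] (so |T| = 1), estimate how often the peak of |X(ω)| lands within 2π of ω0 as M grows. The trial count, grid, and choice of M values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
omega0 = 17.3
omegas = np.linspace(-60, 60, 6001)           # grid standing in for Omega

for M in (5, 10, 20, 50):
    hits = 0
    for _ in range(200):
        t = rng.uniform(-0.5, 0.5, M)         # |T| = 1 observation window
        X = np.exp(-1j * np.outer(omegas, t)) @ np.exp(1j * omega0 * t)
        hits += abs(omegas[np.argmax(np.abs(X))] - omega0) <= 2 * np.pi
    print(f"M = {M:3d}: peak within 2*pi of omega0 in {hits / 200:.0%} of trials")
```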

Extensions

• Arbitrary unknown amplitude + Gaussian noise

[Figure: random sample times t1, t2, …, tM]

Extension to Noisy Samples

• Observations: noisy samples of the tone with unknown amplitude A
• Random processes:
$$\langle y, \psi_\omega \rangle = A \cdot X(\omega) + N(\omega)$$

• Bounds:
$$\mathbb{E} \sup_{\omega \in \Omega} \big| A \cdot X(\omega) - \mathbb{E}[A \cdot X(\omega)] \big| \;\le\; C \cdot |A| \cdot \sqrt{M \log |\Omega|}$$
$$\mathbb{E} \sup_{\omega \in \Omega} |N(\omega)| \;\le\; C \cdot \sigma_n \cdot \sqrt{M \log |\Omega|}$$

Estimation Accuracy

• If
$$M \;\ge\; C \cdot \max\big(\log(|\Omega|\,|T|),\, \log(2/\delta)\big) \cdot \frac{\sigma_n^2}{|A|^2},$$
then with probability at least 1 − 2δ, the peak
$$\hat{\omega}_0 = \arg\max_{\omega \in \Omega} |A \cdot X(\omega) + N(\omega)|$$
will have a guaranteed accuracy of
$$|\hat{\omega}_0 - \omega_0| \;\le\; \frac{2\pi}{|T|}.$$
• The amplitude A can then be accurately estimated via least squares.
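A sketch of the noisy pipeline, including the closing least-squares amplitude step; the amplitude, noise level, seed, and grid below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
M, omega0, A, sigma_n = 50, 17.3, 2.0 - 0.5j, 0.3
t = rng.uniform(-0.5, 0.5, M)
noise = sigma_n * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
y = A * np.exp(1j * omega0 * t) + noise       # noisy tone samples

omegas = np.linspace(-60, 60, 6001)
X = np.exp(-1j * np.outer(omegas, t)) @ y     # A*X(omega) + N(omega) on the grid
omega_hat = omegas[np.argmax(np.abs(X))]

# Least-squares amplitude: A_hat = <psi, y> / ||psi||^2 = psi^H y / M,
# since every entry of psi has unit magnitude.
psi = np.exp(1j * omega_hat * t)
A_hat = (psi.conj() @ y) / M
print("omega_hat:", omega_hat, " A_hat:", A_hat)
```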

Compressive Matched Filtering

Exchanging Time and Frequency

• Known pulse template s0(t), unknown delay τ0 ∈ T
• Random samples in frequency on Ω

[Figure: template spectrum ŝ0(ω)]

• Compute test statistics $X(\tau) = \langle y, \psi_\tau \rangle$ and let
$$\hat{\tau}_0 = \arg\max_{\tau \in T} |X(\tau)|$$
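A minimal numpy sketch of the compressive matched filter with a Gaussian pulse standing in for s0(t); the template width, frequency range, and delay grid are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
M, tau0, width = 30, 0.12, 0.02

def s0_hat(omega):
    # Fourier transform of a Gaussian pulse exp(-t^2/(2 width^2)), up to scale
    return np.exp(-0.5 * (width * omega) ** 2)

omega = rng.uniform(-300, 300, M)               # random frequency samples on Omega
y = s0_hat(omega) * np.exp(-1j * omega * tau0)  # delayed pulse in frequency

taus = np.linspace(-0.5, 0.5, 4001)             # candidate delays in T
# X(tau) = <y, psi_tau> with psi_tau[m] = s0_hat(omega_m) e^{-j omega_m tau}
X = (np.exp(1j * np.outer(taus, omega)) * s0_hat(omega)) @ y

tau_hat = taus[np.argmax(np.abs(X))]
print("tau_hat:", tau_hat, " true:", tau0)
```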

Experiment: Narrow Gaussian Pulse

[Plots: compressive matched filter output for a narrow Gaussian pulse]

Matched Filter Guarantees

• Measurement bounds again scale with
$$\mathrm{SNR}^{-1} \cdot \log(|\Omega|\,|T|)$$
times a factor depending on the uniformity of the spectrum

[Figure: template spectrum ŝ0(ω)]

Interpreting the Guarantee

• When M is on the order of the bound above, the compressive matched filter is as robust to noise as traditional Nyquist sampling
• However, when noise is small, this gives us a principled way to undersample without the risk of aliasing

[Figure: template spectrum ŝ0(ω)]

Conclusions

• Random measurements
  - recover low-complexity signals
  - answer low-complexity questions

• Compressive matched filter
  - simple estimation
  - analytical framework based on random processes
  - robust performance with sub-Nyquist measurements
  - measurement bounds agnostic to sparsity level
  - could incorporate into a larger algorithm

• More is known about these problems
  - spectral compressive sensing [Duarte, Baraniuk]
  - delay estimation using unions of subspaces [Gedalyahu, Eldar]