Structured Compressed Sensing - Using Patterns in Sparsity
Johannes Maly
Technische Universität München, Department of Mathematics, Chair of Applied Numerical Analysis
CoSIP Workshop, Berlin, December 9, 2016

Overview
Classical Compressed Sensing
Structures in Sparsity I - Joint Sparsity
Structures in Sparsity II - Union of Subspaces
Conclusion
Classical Compressed Sensing
Let x ∈ R^N be some unknown k-sparse signal. Then x can be recovered from few linear measurements

y = A · x,

where A ∈ R^{m×N} is a (random) matrix, y ∈ R^m is the vector of measurements, and m ≪ N.
It is sufficient to take

m ≥ C k log(N/k)

measurements to recover x (with high probability) by greedy strategies, e.g. Orthogonal Matching Pursuit, or convex optimization, e.g. ℓ1-minimization.
Classical Compressed Sensing: Orthogonal Matching Pursuit

OMP is a simple algorithm that tries to find the true support of x in k greedy steps. Step by step, it selects the columns of A that have the highest correlation with the current residual, building up a support estimate T.
OMP
INPUT: matrix A, measurement vector y.
INIT: T_0 = ∅, x_0 = 0.
ITERATION: until stopping criterion is met
    j_{n+1} ← arg max_{j ∈ [N]} |(A^T (y − A x_n))_j|,   T_{n+1} ← T_n ∪ {j_{n+1}},
    x_{n+1} ← arg min_{z ∈ R^N} { ‖y − A z‖_2 : supp(z) ⊂ T_{n+1} }.
OUTPUT: the ñ-sparse approximation x̂ := x_ñ.
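The iteration above can be sketched in a few lines of NumPy. This is an illustrative implementation, not code from the talk; the toy problem (Gaussian A, support {5, 17, 63}) is an assumed example, and the stopping criterion is simply k iterations.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: k greedy steps, as in the pseudocode above."""
    N = A.shape[1]
    T = []                                    # support estimate T_n
    x = np.zeros(N)
    for _ in range(k):                        # stopping criterion: k iterations
        r = y - A @ x                         # current residual
        j = int(np.argmax(np.abs(A.T @ r)))   # column most correlated with residual
        if j not in T:
            T.append(j)
        x = np.zeros(N)                       # least squares restricted to supp ⊂ T
        x[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
    return x

# hypothetical toy run: Gaussian A, 3-sparse ground truth
rng = np.random.default_rng(0)
m, N, k = 50, 100, 3
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[[5, 17, 63]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k)
```

With m = 50 measurements of a 3-sparse signal in R^100, the support is found in the first three greedy steps with overwhelming probability, and the restricted least-squares step then recovers x exactly.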
Example: suppose x = e_ℓ. Then

A · e_ℓ = [a_1 | ··· | a_N] · e_ℓ = a_ℓ =: y,

and the first correlation step gives

A^T y = (⟨a_1, y⟩, …, ⟨a_N, y⟩)^T = (⟨a_1, a_ℓ⟩, …, ⟨a_N, a_ℓ⟩)^T  ⇒  j_1 = ℓ.
Structures in Sparsity I - Joint Sparsity: Joint Sparsity with Multiple Measurement Vectors

We now want to improve on classical CS by using additional structure in the sparsity pattern. Suppose we have not just one measurement vector y from one sparse signal,

A · x = y,
but L different measurements y_1, …, y_L given by L different signals x_1, …, x_L sharing a common support T_X ⊂ [N], |T_X| ≤ k:

A · [x_1 | ··· | x_L] = [y_1 | ··· | y_L]  ⇔  A · X = Y,

where X ∈ R^{N×L} is k-row-sparse and Y ∈ R^{m×L}.
Structures in Sparsity I - Joint Sparsity: MMV in Theory

Definition (spark(A)). The spark of a matrix A ∈ R^{m×N} is the smallest number of linearly dependent columns of A. It satisfies spark(A) ≤ m + 1.
Based on the spark, a necessary and sufficient condition for the measurements y = Ax to uniquely determine every k-sparse vector x is

k < spark(A) / 2,

which leads to the requirement m ≥ 2k.
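Computing the spark is combinatorially hard in general, but for tiny matrices the definition can be checked directly by brute force. The following helper is an illustration of the definition, not a practical algorithm; the example matrix is an assumption for demonstration.

```python
import itertools
import numpy as np

def spark(A, tol=1e-10):
    """Brute-force spark: smallest number of linearly dependent columns.
    Exponential cost -- only feasible for tiny matrices."""
    m, N = A.shape
    for s in range(1, N + 1):
        for cols in itertools.combinations(range(N), s):
            # s columns are linearly dependent iff the m x s submatrix has rank < s
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < s:
                return s
    return N + 1  # no dependent subset exists (only possible when N <= m)

A = np.array([[1., 0., 1.],
              [0., 1., 0.]])   # third column repeats the first
print(spark(A))               # → 2
```

Here the pair of columns {1, 3} is dependent, so spark(A) = 2, consistent with the bound spark(A) ≤ m + 1 = 3.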
In the MMV case, a sufficient condition for the measurements Y = AX to uniquely determine the jointly k-sparse matrix X is

k < (spark(A) − 1 + rank(X)) / 2,

which leads to the requirement m ≥ k + 1 if spark(A) and rank(X) are optimal, i.e. spark(A) = m + 1 and rank(X) = k (see [1]).
Structures in Sparsity I - Joint Sparsity: Simultaneous OMP

SOMP is a small modification of OMP that benefits from several different measurement vectors: choosing the support index via the largest row norm of the residual correlations is intended to improve support recovery.
SOMP
INPUT: matrix A, measurement matrix Y = [y_1, …, y_L].
INIT: T_0 = ∅, X_0 = 0.
ITERATION: until stopping criterion is met
    j_{n+1} ← arg max_{j ∈ [N]} ‖(A^T (Y − A X_n))_j‖_p,   T_{n+1} ← T_n ∪ {j_{n+1}},
    X_{n+1} ← arg min_{Z ∈ R^{N×L}} { ‖Y − A Z‖_2 : supp(Z) ⊂ T_{n+1} }.
OUTPUT: the ñ-row-sparse approximation X̂ := X_ñ.
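The SOMP iteration differs from OMP only in the selection rule, which can be seen directly in code. A minimal sketch, assuming p = 2 row norms and a hypothetical toy problem with L = 4 signals sharing the support {3, 11, 42}:

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP (p = 2 row norms), following the pseudocode above."""
    N = A.shape[1]
    L = Y.shape[1]
    T = []
    X = np.zeros((N, L))
    for _ in range(k):
        R = Y - A @ X                                 # residual matrix
        row_norms = np.linalg.norm(A.T @ R, axis=1)   # ℓ2 norm of each row of A^T R
        j = int(np.argmax(row_norms))
        if j not in T:
            T.append(j)
        X = np.zeros((N, L))                          # least squares on the row support
        X[T, :] = np.linalg.lstsq(A[:, T], Y, rcond=None)[0]
    return X

# hypothetical toy run: L = 4 jointly 3-row-sparse signals
rng = np.random.default_rng(1)
m, N, k, L = 30, 80, 3, 4
A = rng.standard_normal((m, N)) / np.sqrt(m)
X_true = np.zeros((N, L))
X_true[[3, 11, 42], :] = rng.standard_normal((3, L))
X_hat = somp(A, A @ X_true, k)
```

Compared to `omp` above, the only change is aggregating correlations across the L residual columns before picking an index, which is exactly how joint sparsity is exploited.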
Structures in Sparsity I - Joint Sparsity: SOMP - Numerics

[Figure: SOMP comparison with N = 256, m = 32 and L = 1, 2, 4, 8, 16, 32 (from left to right); Source: [2]]
Structures in Sparsity II - Union of Subspaces: Why Structure?

Without structure, recovering a k-sparse signal needs m ≈ k log(N/k) measurements. If the support carries additional structure, how few measurements suffice? m ≈ ?
Structures in Sparsity II - Union of Subspaces: From Sparsity to Subspaces

If x ∈ R^N is k-sparse, it belongs to the set

U := {z ∈ R^N : |supp(z)| ≤ k}.
This set of k-sparse signals can be decomposed into a union of k-dimensional subspaces:

U = ⋃_{T ∈ T} U_T = ⋃_{T ⊂ [N], |T| = k} U_T,   U_T := {z ∈ R^N : supp(z) = T}.
Definition (k-sparse RIP). A matrix A ∈ R^{m×N} has the k-sparse Restricted Isometry Property with constant δ if, for all k-sparse x ∈ R^N,

(1 − δ) ‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ) ‖x‖_2^2.

If A ∈ R^{m×N} has i.i.d. Gaussian entries and m = O(k log(N/k)), it satisfies the k-sparse RIP with high probability.
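The near-isometry can be probed empirically. The Monte Carlo sketch below (all dimensions are assumed for illustration) samples random k-sparse vectors and records the distortion ‖Ax‖²/‖x‖²; note that random sampling only lower-bounds the true δ, since the RIP demands the bound uniformly over all C(N, k) supports.

```python
import numpy as np

rng = np.random.default_rng(2)
m, N, k = 80, 256, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)   # i.i.d. N(0, 1/m) entries

# sample random k-sparse vectors and record the distortion ‖Ax‖² / ‖x‖²
ratios = []
for _ in range(2000):
    x = np.zeros(N)
    supp = rng.choice(N, size=k, replace=False)
    x[supp] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

# empirical distortion: a lower bound on the true RIP constant δ
delta_emp = max(1.0 - min(ratios), max(ratios) - 1.0)
```

For these dimensions the observed ratios concentrate around 1, as the Gaussian RIP result predicts.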
More generally, let x ∈ R^N belong to a set

U := ⋃_{T ∈ T} U_T

that decomposes into |T| subspaces of dimension D.
Definition (U-RIP). A matrix A ∈ R^{m×N} has the U Restricted Isometry Property with constant δ if, for all x ∈ U,

(1 − δ) ‖x‖_2^2 ≤ ‖Ax‖_2^2 ≤ (1 + δ) ‖x‖_2^2.
Theorem (Blumensath, Davies [5]). Let A ∈ R^{m×N} be a matrix with i.i.d. subgaussian entries and let t > 0. If

m ≥ (2 / (cδ)) · ( log(2|T|) + D log(12/δ) + t ),

then A has the U-RIP with probability at least 1 − e^{−t}.
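The bound makes the benefit of structure quantitative: fewer subspaces |T| shrink the log(2|T|) term. The sketch below evaluates the right-hand side for a generic k-sparse model versus a hypothetical block-sparse model; the values of c, δ, and t are arbitrary placeholders, so only the comparison, not the absolute numbers, is meaningful.

```python
import math

def m_bound(num_subspaces, D, delta=0.5, c=1.0, t=1.0):
    """Right-hand side of the theorem's bound; c, delta, t are placeholder values."""
    return (2.0 / (c * delta)) * (math.log(2 * num_subspaces) + D * math.log(12.0 / delta) + t)

N, k = 1000, 10
# generic k-sparse signals: a union of C(N, k) subspaces of dimension k
generic = m_bound(math.comb(N, k), D=k)
# block-sparse signals (assumed structure): support confined to one of the
# N/k contiguous blocks of length k, i.e. only N/k subspaces of dimension k
block = m_bound(N // k, D=k)
print(generic > block)   # → True: structure reduces the required m
```

Both models use D = k dimensional subspaces, so the entire gap comes from counting subspaces: log C(1000, 10) versus log 100.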