Structured Compressed Sensing - Using Patterns in Sparsity

Johannes Maly

Technische Universität München, Department of Mathematics, Chair of Applied Numerical Analysis

[email protected]

CoSIP Workshop, Berlin, December 9, 2016

Overview

Classical Compressed Sensing

Structures in Sparsity I - Joint Sparsity

Structures in Sparsity II - Union of Subspaces

Conclusion

Classical Compressed Sensing

Compressed Sensing

Let $x \in \mathbb{R}^N$ be some unknown $k$-sparse signal. Then $x$ can be recovered from few linear measurements

$$y = A \cdot x,$$

where $A \in \mathbb{R}^{m \times N}$ is a (random) matrix, $y \in \mathbb{R}^m$ is the vector of measurements, and $m \ll N$.

It is sufficient to have

$$m \gtrsim C\, k \log\left(\frac{N}{k}\right)$$

measurements to recover $x$ (with high probability) by greedy strategies, e.g. Orthogonal Matching Pursuit, or convex optimization, e.g. $\ell_1$-minimization.
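As a concrete illustration, noiseless $\ell_1$-minimization (basis pursuit) can be cast as a linear program. The following is a minimal sketch under that formulation; the function name and test setup are ours, not from the slides.

```python
# Minimal sketch: l1-minimization as a linear program (basis pursuit),
# assuming noiseless measurements y = A x. Names are illustrative.
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Solve min ||z||_1 s.t. A z = y via the split z = u - v, u, v >= 0."""
    m, N = A.shape
    c = np.ones(2 * N)                   # objective: sum(u) + sum(v) = ||z||_1
    A_eq = np.hstack([A, -A])            # A (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y)  # default bounds give u, v >= 0
    u, v = res.x[:N], res.x[N:]
    return u - v

# Example: recover a 3-sparse signal from m = 40 Gaussian measurements.
rng = np.random.default_rng(0)
N, m, k = 120, 40, 3
A = rng.standard_normal((m, N)) / np.sqrt(m)
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(x - l1_min(A, A @ x)))  # should be close to zero
```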

Orthogonal Matching Pursuit

OMP is a simple algorithm that tries to find the true support of $x$ by $k$ greedy steps. It iteratively selects the columns of $A$ that have the highest correlation with the current residual of the measurements to build a support estimate $T$.

OMP
INPUT: matrix $A$, measurement vector $y$.
INIT: $T_0 = \emptyset$, $x_0 = 0$.
ITERATION: until stopping criterion is met
$$j_{n+1} \leftarrow \arg\max_{j \in [N]} \left|\left(A^T (y - A x_n)\right)_j\right|, \qquad T_{n+1} \leftarrow T_n \cup \{j_{n+1}\},$$
$$x_{n+1} \leftarrow \arg\min_{z \in \mathbb{R}^N} \left\{\|y - A z\|_2 : \operatorname{supp}(z) \subset T_{n+1}\right\}.$$
OUTPUT: the $\tilde{n}$-sparse approximation $\hat{x} := x_{\tilde{n}}$
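The OMP box translates almost line by line into NumPy. A minimal sketch under the noiseless model $y = Ax$; the function name and the fixed iteration count in place of a stopping criterion are illustrative choices.

```python
# Minimal NumPy sketch of OMP: k greedy steps for y = A x.
import numpy as np

def omp(A, y, k):
    m, N = A.shape
    support = []                # running support estimate T_n
    x = np.zeros(N)
    for _ in range(k):
        residual = y - A @ x
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(N)
        x[support] = coef       # least squares restricted to T_{n+1}
    return x

# Example: a Gaussian matrix recovers a 4-sparse signal exactly.
rng = np.random.default_rng(0)
N, m, k = 128, 40, 4
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
print(np.linalg.norm(x_true - omp(A, A @ x_true, k)))
```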

As an illustration, consider the first step of OMP on $x = e_\ell$:

$$A \cdot e_\ell = \begin{pmatrix} | & & | \\ a_1 & \cdots & a_N \\ | & & | \end{pmatrix} \cdot \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix} = a_\ell =: y$$

$$\Rightarrow\; A^T y = \begin{pmatrix} \langle a_1, y \rangle \\ \vdots \\ \langle a_N, y \rangle \end{pmatrix} = \begin{pmatrix} \langle a_1, a_\ell \rangle \\ \vdots \\ \langle a_N, a_\ell \rangle \end{pmatrix} \;\Rightarrow\; j_1 = \ell,$$

since for normalized, mutually incoherent columns the diagonal term $\langle a_\ell, a_\ell \rangle$ dominates the cross-correlations.

Structures in Sparsity I - Joint Sparsity

Joint Sparsity with Multiple Measurement Vectors

We now want to improve on classical CS by using additional structure in sparsity. Possibly we not only have one measurement vector $y$ from one sparse signal,

$$A \cdot x = y,$$

but $L$ different measurements $y_1, \ldots, y_L$ given by $L$ different signals $x_1, \ldots, x_L$ sharing a common support $T_X \subset [N]$, $|T_X| \leq k$:

$$A \cdot \begin{pmatrix} | & & | \\ x_1 & \cdots & x_L \\ | & & | \end{pmatrix} = \begin{pmatrix} | & & | \\ y_1 & \cdots & y_L \\ | & & | \end{pmatrix} \quad\Leftrightarrow\quad A \cdot X = Y,$$

where the signals are collected into the $k$-row-sparse matrix $X \in \mathbb{R}^{N \times L}$ and the measurements into $Y \in \mathbb{R}^{m \times L}$.
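A minimal sketch of how this MMV data model can be instantiated; the dimensions and variable names are illustrative.

```python
# Minimal sketch of the MMV model: L jointly k-row-sparse signals
# measured by the same matrix A.
import numpy as np

rng = np.random.default_rng(1)
N, m, L, k = 128, 32, 8, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)

T_X = rng.choice(N, k, replace=False)    # common support of all columns
X = np.zeros((N, L))
X[T_X, :] = rng.standard_normal((k, L))  # k-row-sparse signal matrix
Y = A @ X                                # measurement matrix, shape (m, L)
```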

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 9 of 44 Based on the a necessary and sufficient condition for the measurements y = Ax to uniquely determine each k-sparse vector x is given by

spark(A) k < , 2 which leads to the requirement m ≥ 2k.

Structures in Sparsity I - Joint Sparsity MMV in Theory

Definition (spark(A)) m×N The spark of a matrix A ∈ R is the smallest number of linearly dependent columns of A. It fulfills spark(A) ≤ m + 1.

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 10 of 44 Structures in Sparsity I - Joint Sparsity MMV in Theory

Definition (spark(A)). The spark of a matrix $A \in \mathbb{R}^{m \times N}$ is the smallest number of linearly dependent columns of $A$. It fulfills $\operatorname{spark}(A) \leq m + 1$.

Based on the spark, a necessary and sufficient condition for the measurements $y = Ax$ to uniquely determine each $k$-sparse vector $x$ is

$$k < \frac{\operatorname{spark}(A)}{2},$$

which leads to the requirement $m \geq 2k$.
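The spark is intractable to compute in general, but for tiny matrices it can be brute-forced, which makes the definition concrete. A sketch with illustrative names:

```python
# Brute-force spark computation -- exponential in N, purely illustrative.
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A."""
    m, N = A.shape
    for s in range(1, N + 1):
        for cols in combinations(range(N), s):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < s:
                return s
    return N + 1  # all columns independent (only possible if N <= m)

A = np.random.default_rng(2).standard_normal((4, 6))
print(spark(A))  # generically m + 1 = 5 for a random matrix
```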

In the MMV case, a sufficient condition for the measurements $Y = AX$ to uniquely determine the jointly $k$-sparse $X$ is

$$k < \frac{\operatorname{spark}(A) - 1 + \operatorname{rank}(X)}{2},$$

which leads to the requirement $m \geq k + 1$ if $\operatorname{spark}(A)$ and $\operatorname{rank}(X)$ are optimal, i.e. $\operatorname{spark}(A) = m + 1$ and $\operatorname{rank}(X) = k$ (see [1]).

Simultaneous OMP

SOMP uses a small modification of OMP to benefit from several different measurement vectors: the support index is chosen by the largest row norm of the correlated residual, which improves the support recovery.

SOMP
INPUT: matrix $A$, measurement matrix $Y = (y_1, \ldots, y_L)$.
INIT: $T_0 = \emptyset$, $X_0 = 0$.
ITERATION: until stopping criterion is met
$$j_{n+1} \leftarrow \arg\max_{j \in [N]} \left\|\left(A^T (Y - A X_n)\right)_{j,:}\right\|_p, \qquad T_{n+1} \leftarrow T_n \cup \{j_{n+1}\},$$
$$X_{n+1} \leftarrow \arg\min_{Z \in \mathbb{R}^{N \times L}} \left\{\|Y - A Z\|_2 : \operatorname{supp}(Z) \subset T_{n+1}\right\}.$$
OUTPUT: the $\tilde{n}$-row-sparse approximation $\hat{X} := X_{\tilde{n}}$
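A minimal NumPy sketch of SOMP with $p = 2$ row norms, mirroring the OMP sketch above; names and the fixed iteration count are illustrative.

```python
# Minimal NumPy sketch of SOMP: k greedy steps for Y = A X.
import numpy as np

def somp(A, Y, k):
    m, N = A.shape
    L = Y.shape[1]
    support = []
    X = np.zeros((N, L))
    for _ in range(k):
        R = A.T @ (Y - A @ X)                          # correlations, (N, L)
        j = int(np.argmax(np.linalg.norm(R, axis=1)))  # largest row norm
        if j not in support:
            support.append(j)
        X = np.zeros((N, L))
        X[support, :] = np.linalg.lstsq(A[:, support], Y, rcond=None)[0]
    return X
```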

SOMP - Numerics

[Figure] SOMP comparison with $N = 256$, $m = 32$ and $L = 1, 2, 4, 8, 16, 32$ (from left to right); Source: [2]

Structures in Sparsity II - Union of Subspaces

Why Structure?

For a generic $k$-sparse signal, classical CS needs

$$m \approx k \log\left(\frac{N}{k}\right)$$

measurements. If the sparsity pattern carries additional structure:

$$m \approx \;?$$

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 17 of 44 This set of k-sparse signals can be decomposed into a union of k-dimensional subspaces

[ [ N U = UT := {z ∈ R : supp(z) = T }. T ∈T T ⊂[N], |T |=k

Structures in Sparsity II - Union of Subspaces From Sparsity to Subspaces

If x ∈ R is k-sparse, it belongs to the set N U := {z ∈ R : | supp(z)| ≤ k}.

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 18 of 44 Structures in Sparsity II - Union of Subspaces From Sparsity to Subspaces

If x ∈ R is k-sparse, it belongs to the set N U := {z ∈ R : | supp(z)| ≤ k}. This set of k-sparse signals can be decomposed into a union of k-dimensional subspaces

[ [ N U = UT := {z ∈ R : supp(z) = T }. T ∈T T ⊂[N], |T |=k

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 18 of 44 Definition (k-sparse RIP) m×N A matrix A ∈ R has the k-sparse Restricted Isometry Property N with constant δ if, for all k-sparse x ∈ R ,

2 2 2 (1 − δ)kxk2 ≤ kAxk2 ≤ (1 + δ)kxk2.

m×N N If A ∈ R has i.i.d. Gaussian entries and m ≈ O(k log( k )), it satisfies the k-sparse RIP with high probability.

Structures in Sparsity II - Union of Subspaces From Sparsity to Subspaces

If x ∈ R is k-sparse, it belongs to the set

[ N U := {z ∈ R : supp(z) = T }. T ⊂[N], |T |=k

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 19 of 44 Structures in Sparsity II - Union of Subspaces From Sparsity to Subspaces

If x ∈ R is k-sparse, it belongs to the set

[ N U := {z ∈ R : supp(z) = T }. T ⊂[N], |T |=k

Definition (k-sparse RIP) m×N A matrix A ∈ R has the k-sparse Restricted Isometry Property N with constant δ if, for all k-sparse x ∈ R ,

2 2 2 (1 − δ)kxk2 ≤ kAxk2 ≤ (1 + δ)kxk2.

m×N N If A ∈ R has i.i.d. Gaussian entries and m ≈ O(k log( k )), it satisfies the k-sparse RIP with high probability.
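The RIP is a uniform statement over all $k$-sparse vectors and cannot be certified by sampling, but a quick Monte Carlo experiment illustrates the norm concentration behind it. A sketch with illustrative parameters:

```python
# Illustrative Monte Carlo check (not a proof) of RIP-style concentration
# for a Gaussian matrix. Sampling random sparse vectors only gives an
# empirical, optimistic view of the true RIP constant.
import numpy as np

rng = np.random.default_rng(3)
N, m, k, trials = 256, 80, 5, 2000
A = rng.standard_normal((m, N)) / np.sqrt(m)

ratios = []
for _ in range(trials):
    x = np.zeros(N)
    x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

print(min(ratios), max(ratios))  # ||Ax||^2 / ||x||^2 concentrates around 1
```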

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 19 of 44 Definition (U-RIP) m×N A matrix A ∈ R has the U Restricted Isometry Property with constant δ if, for all x ∈ U,

2 2 2 (1 − δ)kxk2 ≤ kAxk2 ≤ (1 + δ)kxk2.

Structures in Sparsity II - Union of Subspaces From Sparsity to Subspaces

Let x ∈ R belong to the set [ U := UT T ∈T

which is decomposed in |T | subspaces of dimension D.

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 20 of 44 Structures in Sparsity II - Union of Subspaces From Sparsity to Subspaces

Let x ∈ R belong to the set [ U := UT T ∈T

which is decomposed in |T | subspaces of dimension D. Definition (U-RIP) m×N A matrix A ∈ R has the U Restricted Isometry Property with constant δ if, for all x ∈ U,

2 2 2 (1 − δ)kxk2 ≤ kAxk2 ≤ (1 + δ)kxk2.

Theorem (Blumensath, Davies [3]). Let $A \in \mathbb{R}^{m \times N}$ be a matrix with i.i.d. subgaussian entries and $t > 0$. If

$$m \geq \frac{2}{c\delta}\left(\log(2|\mathcal{T}|) + D \log\left(\frac{12}{\delta}\right) + t\right),$$

then $A$ has the U-RIP with probability at least $1 - e^{-t}$.

Absorbing the constants, the bound reads

$$m \geq c_\delta \log(2|\mathcal{T}|) + D\, c_\delta' + c_t.$$

For classical $k$-sparsity, i.e. $|\mathcal{T}| = \binom{N}{k}$ and $D = k$, this becomes

$$m \geq c_\delta \log\left(2\binom{N}{k}\right) + k\, c_\delta' + c_t = O\left(k \log\left(\frac{N}{k}\right)\right).$$

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 23 of 44 To do so, we define the the blocks {Bj : 1 ≤ j ≤ nB} as a set of linear subspaces of dimension dj and each UT as M UT = Bj , j∈T

where T is element of T := {T ⊂ [nB]: |T | = k}.

Structures in Sparsity II - Union of Subspaces Sparse Sums of Subspaces

We can use the generalized model of subspaces to model structures like block sparsity, i.e. signals of the form

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 24 of 44 Structures in Sparsity II - Union of Subspaces Sparse Sums of Subspaces

We can use the generalized model of subspaces to model structures like block sparsity, i.e. signals of the form

To do so, we define the the blocks {Bj : 1 ≤ j ≤ nB} as a set of linear subspaces of dimension dj and each UT as M UT = Bj , j∈T

where T is element of T := {T ⊂ [nB]: |T | = k}.
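A minimal sketch of a $k$-block-sparse signal with equal-size consecutive blocks; the block layout and names are illustrative assumptions.

```python
# Minimal sketch: N coordinates split into n_B consecutive blocks of
# equal size d, with k active blocks.
import numpy as np

rng = np.random.default_rng(4)
d, n_B, k = 8, 16, 3
N = d * n_B

x = np.zeros(N)
active = rng.choice(n_B, k, replace=False)   # the set T of active blocks
for j in active:
    x[j * d:(j + 1) * d] = rng.standard_normal(d)
```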

Assuming, for simplicity, that all blocks $B_j$ share dimension $d$, we obtain the U-RIP for Gaussian measurements already for

$$m \geq c_\delta \log\left(2\binom{N/d}{k}\right) + (dk)\, c_\delta' + c_t \approx O\left(k \log\left(\frac{N}{dk}\right) + dk\right),$$

and not only for

$$m \approx O\left(dk \log\left(\frac{N}{dk}\right)\right)$$

as conventional CS requires without prior knowledge of the sparsity structure.

Exploring Subspace Structure in Recovery

We illustrate, exemplarily with CoSaMP, how recovery algorithms can be modified to exploit the structural knowledge. Define $L_k(z) := \{\text{index set of the } k \text{ largest absolute entries of } z\}$.

CoSaMP
INPUT: matrix $A$, measurement vector $y$, sparsity level $k$.
INIT: $x_0 = 0$.
ITERATION: until stopping criterion is met
$$J_{n+1} \leftarrow L_{2k}\left(A^T (y - A x_n)\right), \qquad T_{n+1} \leftarrow \operatorname{supp}(x_n) \cup J_{n+1},$$
$$u_{n+1} \leftarrow \arg\min_{z \in \mathbb{R}^N} \left\{\|y - A z\|_2 : \operatorname{supp}(z) \subset T_{n+1}\right\},$$
$$x_{n+1} \leftarrow H_k(u_{n+1}) \;\left(= P_{L_k(u_{n+1})} u_{n+1}\right).$$
OUTPUT: the $k$-sparse approximation $\hat{x} := x_{\tilde{n}}$
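A minimal NumPy sketch of the boxed CoSaMP iteration, with a fixed iteration count in place of a stopping criterion; names are illustrative.

```python
# Minimal NumPy sketch of CoSaMP; H_k keeps the k largest absolute entries.
import numpy as np

def hard_threshold(u, k):
    x = np.zeros_like(u)
    idx = np.argsort(np.abs(u))[-k:]   # L_k(u)
    x[idx] = u[idx]
    return x

def cosamp(A, y, k, n_iter=20):
    m, N = A.shape
    x = np.zeros(N)
    for _ in range(n_iter):
        proxy = A.T @ (y - A @ x)
        J = np.argsort(np.abs(proxy))[-2 * k:]            # L_{2k}
        T = np.union1d(np.flatnonzero(x), J).astype(int)  # merge supports
        u = np.zeros(N)
        u[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
        x = hard_threshold(u, k)                          # prune to k-sparse
    return x
```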

The selection step can equivalently be written via the hard thresholding operator $H_{2k}$:

$$J_{n+1} \leftarrow \operatorname{supp}\left(H_{2k}\left(A^T (y - A x_n)\right)\right), \qquad x_{n+1} \leftarrow H_k(u_{n+1}).$$

Replacing the thresholding operators $H_{2k}$ and $H_k$ by model projections $M_2$ and $M_1$ yields:

Model-Based CoSaMP
INPUT: matrix $A$, measurement vector $y$.
INIT: $x_0 = 0$.
ITERATION: until stopping criterion is met
$$J_{n+1} \leftarrow \operatorname{supp}\left(M_2\left(A^T (y - A x_n)\right)\right), \qquad T_{n+1} \leftarrow \operatorname{supp}(x_n) \cup J_{n+1},$$
$$u_{n+1} \leftarrow \arg\min_{z \in \mathbb{R}^N} \left\{\|y - A z\|_2 : \operatorname{supp}(z) \subset T_{n+1}\right\},$$
$$x_{n+1} \leftarrow M_1(u_{n+1}).$$
OUTPUT: the model-conform approximation $\hat{x} := x_{\tilde{n}}$

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 28 of 44 ` This can be generalized by defining the `-Minkowski sum MU for the set U by

 `  `  X (j) (j)  MU := x = x , where x ∈ U, for 1 ≤ j ≤ `  j=1 

and the operator M` by

M`(x) := arg min kx − zk2. ` z∈MU

Structures in Sparsity II - Union of Subspaces

What is M`?

In case of classical sparsity k we know M` = H`k , i.e. M` is a projection onto the set of all `k-sparse vectors.

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 29 of 44 and the operator M` by

M`(x) := arg min kx − zk2. ` z∈MU

Structures in Sparsity II - Union of Subspaces

What is M`?

In case of classical sparsity k we know M` = H`k , i.e. M` is a projection onto the set of all `k-sparse vectors. ` This can be generalized by defining the `-Minkowski sum MU for the set U by

 `  `  X (j) (j)  MU := x = x , where x ∈ U, for 1 ≤ j ≤ `  j=1 

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 29 of 44 Structures in Sparsity II - Union of Subspaces

What is M`?

In case of classical sparsity k we know M` = H`k , i.e. M` is a projection onto the set of all `k-sparse vectors. ` This can be generalized by defining the `-Minkowski sum MU for the set U by

 `  `  X (j) (j)  MU := x = x , where x ∈ U, for 1 ≤ j ≤ `  j=1 

and the operator M` by

M`(x) := arg min kx − zk2. ` z∈MU

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 29 of 44 ` The `-Minkowski sum MU contains all signals with up to `k active blocks. Therefore, M` keeps the in `2-norm `k largest blocks and sets the rest to zero.

Structures in Sparsity II - Union of Subspaces

What is M`?

Recall, in case of k-block sparsity with blocks Bj [ [ M U = UT = Bj

T ⊂[nB], T ⊂[nB], j∈T |T |=k |T |=k

which contains all signals that have up to k active blocks.

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 30 of 44 Structures in Sparsity II - Union of Subspaces

What is M`?

Recall, in case of k-block sparsity with blocks Bj [ [ M U = UT = Bj

T ⊂[nB], T ⊂[nB], j∈T |T |=k |T |=k

which contains all signals that have up to k active blocks. ` The `-Minkowski sum MU contains all signals with up to `k active blocks. Therefore, M` keeps the in `2-norm `k largest blocks and sets the rest to zero.
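For equal-size consecutive blocks, this projection is easy to implement: keep the $\ell k$ blocks with the largest $\ell_2$-norm. A sketch with illustrative names:

```python
# Minimal sketch of M_l for consecutive blocks of equal size d.
import numpy as np

def block_threshold(x, d, n_keep):
    """Project x onto signals with at most n_keep active size-d blocks."""
    blocks = x.reshape(-1, d)                  # one row per block
    norms = np.linalg.norm(blocks, axis=1)
    keep = np.argsort(norms)[-n_keep:]         # blocks largest in l2-norm
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)

def M(x, d, k, l=1):
    return block_threshold(x, d, l * k)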

Exploring Subspace Structure in Recovery

With $M_1$ and $M_2$ in place, the Model-Based CoSaMP iteration above is fully specified: $J_{n+1} \leftarrow \operatorname{supp}\left(M_2\left(A^T (y - A x_n)\right)\right)$ and $x_{n+1} \leftarrow M_1(u_{n+1})$.
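Combining the CoSaMP sketch with the block projection gives a minimal Model-Based CoSaMP for block sparsity; this is our illustrative reading of the boxed algorithm, not the authors' code, and it assumes $N$ is divisible by the block size $d$.

```python
# Sketch of Model-Based CoSaMP for k-block sparsity (equal-size blocks).
import numpy as np

def block_threshold(x, d, n_keep):
    blocks = x.reshape(-1, d)
    keep = np.argsort(np.linalg.norm(blocks, axis=1))[-n_keep:]
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(-1)

def model_cosamp(A, y, d, k, n_iter=20):
    m, N = A.shape          # N must be divisible by d
    x = np.zeros(N)
    for _ in range(n_iter):
        proxy = A.T @ (y - A @ x)
        J = np.flatnonzero(block_threshold(proxy, d, 2 * k))  # supp(M_2(.))
        T = np.union1d(np.flatnonzero(x), J).astype(int)
        u = np.zeros(N)
        u[T] = np.linalg.lstsq(A[:, T], y, rcond=None)[0]
        x = block_threshold(u, d, k)                          # x = M_1(u)
    return x
```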

Theoretical Guarantees of Model-Based Recovery

Theorem (Baraniuk, Cevher, Duarte, Hegde [4]). Let $x \in U$ and $y = Ax + n$ be noisy CS measurements. If $A$ satisfies the $\mathcal{M}_U^4$-RIP with $\delta < 0.1$, then the $i$-th iterate $x_i$ of Model-Based CoSaMP fulfills

$$\|x - x_i\|_2 \leq 2^{-i}\|x\|_2 + 15\|n\|_2.$$

Model-Based CoSaMP for a Block-Sparse Signal

[Figure] (a) Example block-compressible signal of length N = 4096 with k = 6 active blocks of size d = 64. Recovery from m = 960 measurements using (b) CoSaMP and (c) model-based CoSaMP; Source: [4]

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 33 of 44 Doing so the required number of measurements for a suitable RIP can be reduced from

0  m ≥ cδ log(2|T |) + kcδ + ct ,

Structures in Sparsity II - Union of Subspaces Individually Structured Sparse Supports

In certain frameworks one might be interested in using more involved sparsity information which cannot be expressed in terms of block sparsity. Then it may be useful to restrict the signal space

[ [ N U = UT := {z ∈ R : supp(z) = T } T ∈T T ⊂[N], |T |=k

to arbitrary subsets S ⊂ T of subspaces.

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 34 of 44 Structures in Sparsity II - Union of Subspaces Individually Structured Sparse Supports

In certain frameworks one might be interested in using more involved sparsity information which cannot be expressed in terms of block sparsity. Then it may be useful to restrict the signal space

[ [ N U = UT := {z ∈ R : supp(z) = T } T ∈T T ⊂[N], |T |=k

to arbitrary subsets S ⊂ T of subspaces. Doing so the required number of measurements for a suitable RIP can be reduced from

0  m ≥ cδ log(2|T |) + kcδ + ct ,

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 34 of 44 Structures in Sparsity II - Union of Subspaces Individually Structured Sparse Supports

In certain frameworks one might be interested in using more involved sparsity information which cannot be expressed in terms of block sparsity. Then it may be useful to restrict the signal space

[ [ N U = UT := {z ∈ R : supp(z) = T } T ∈T T ⊂[N], |T |=k

to arbitrary subsets S ⊂ T of subspaces. Doing so the required number of measurements for a suitable RIP can be reduced to

0  m ≥ cδ log(2|S|) + kcδ + ct ,

Tree-Sparse Signals

One example is given by sparse wavelet decompositions. In this case the non-zero coefficients are embedded in a tree-like structure.

[Figure] Binary wavelet tree for a one-dimensional signal. Large coefficients caused by discontinuities form a tree-like structure; Source: [4]

It can be shown that measurements of order $O(k)$ are sufficient to guarantee an $\mathcal{S}_T$-RIP, where $\mathcal{S}_T$ is the subset of $k$-sparse coefficient vectors obeying a tree structure: the number of rooted subtrees of size $k$ grows only exponentially in $k$, so $\log(2|\mathcal{S}_T|) = O(k)$. Moreover, there exist efficient solvers for

$$\arg\min_{z \in \mathcal{S}_T} \|x - z\|_2,$$

e.g. the condensing sort and select algorithm (CSSA).

⇒ Model-Based CoSaMP efficiently recovers tree-sparse signals from a reduced number of measurements.

[Figure] Performance of CoSaMP vs. wavelet tree-based recovery on a class of piecewise-cubic signals as a function of M/K; Source: [4]

Conclusion

• Simple modifications of well-known greedy pursuits can be used to exploit structures in sparsity
• Joint sparsity depends heavily on the independence of the signals
• The generalized subspace model admits different ways of including prior knowledge into recovery
• Structure in sparsity can only improve the log-factors, as the intrinsic information dimension is unchanged

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 40 of 44 Conclusion References

[1] M. F. Duarte, Y. C. Eldar, ’Structured Compressed Sensing: From Theory to Applications’, IEEE, 2011 [2] M. E. Davies, Y. C. Eldar, ’Rank Awareness in Joint Sparse Recovery’, IEEE, 2012 [3] T. Blumensath, M. E. Davies, ’Sampling Theorems for Signals From the Union of Finite-Dimensional Linear Subspaces’, IEEE, 2009 [4] R. G. Baraniuk, V. Cevher, M. F. Duarte, C. Hegde, ’Model-Based Compressive Sensing’, IEEE, 2010 [5] S. Foucart, H. Rauhut, ’A Mathematical Introduction to Compressive Sensing’, Springer, 2013

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 41 of 44 How could a convex recovery approach be designed?

Conclusion Convex Approach

Recall the block sparsity model

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 42 of 44 Conclusion Convex Approach

Recall the block sparsity model

How could a convex recovery approach be designed?

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 42 of 44 Conclusion Convex Approach

Define a norm on

N R = B1 ⊕ · · · ⊕ BnB

by

n XB kxkB := kxBi k2, i=1

where xBi is x with all entries not in Bi set to zero.

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 43 of 44 Conclusion Convex Approach

Define a norm on

N R = B1 ⊕ · · · ⊕ BnB

by

n XB kxkB := kxBi k2, i=1

where xBi is x with all entries not in Bi set to zero.
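A minimal sketch of $\|\cdot\|_B$ for equal-size consecutive blocks; minimizing this mixed $\ell_2/\ell_1$ norm subject to $Az = y$ would be the natural group-lasso-type analogue of $\ell_1$-minimization (our reading of the question posed above, not a statement from the slides).

```python
# Minimal sketch of the mixed block norm ||x||_B for size-d blocks.
import numpy as np

def block_norm(x, d):
    """Sum of l2-norms of the consecutive size-d blocks of x."""
    return np.linalg.norm(x.reshape(-1, d), axis=1).sum()
```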

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 43 of 44 If rank(X ) = R  min(n1, n2), then

m & CR max(n1, n2) measurements already guarantee recovery via convex optimization, e.g. nuclear norm minimization.

Conclusion From CS to Low-Rank Matrix Recovery

Instead of some unknown vector x, one can recover a low-rank matrix n ×n X ∈ R 1 2 from few linear measurements

y = A(X )

n ×n m where A : R 1 2 → R is now a (random) tensor and m  n1n2.

Johannes Maly Structured Compressed Sensing - Using Patterns in Sparsity 44 of 44 Conclusion From CS to Low-Rank Matrix Recovery

Instead of some unknown vector x, one can recover a low-rank matrix n ×n X ∈ R 1 2 from few linear measurements

y = A(X )

n ×n m where A : R 1 2 → R is now a (random) tensor and m  n1n2. If rank(X ) = R  min(n1, n2), then

m & CR max(n1, n2) measurements already guarantee recovery via convex optimization, e.g. nuclear norm minimization.

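A minimal sketch of the low-rank measurement model $y = \mathcal{A}(X)$, with the operator stored as an $m \times n_1 \times n_2$ tensor of Gaussian test matrices; all names and sizes are illustrative, and the nuclear norm appears only as the convex surrogate for the rank.

```python
# Minimal sketch: random linear measurements y_i = <A_i, X> of a
# rank-R matrix, and the nuclear norm as a convex rank proxy.
import numpy as np

rng = np.random.default_rng(5)
n1, n2, R, m = 30, 20, 2, 200

# random rank-R matrix X
X = rng.standard_normal((n1, R)) @ rng.standard_normal((R, n2))

# measurement tensor: m Gaussian matrices A_i, stacked along axis 0
A = rng.standard_normal((m, n1, n2)) / np.sqrt(m)
y = np.tensordot(A, X, axes=([1, 2], [0, 1]))  # y_i = <A_i, X>, shape (m,)

nuclear_norm = np.linalg.norm(X, ord='nuc')    # sum of singular values
print(y.shape, nuclear_norm)
```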