
arXiv:1702.06781v2 [cs.IT] 28 Feb 2020

GELFAND NUMBERS RELATED TO STRUCTURED SPARSITY AND BESOV SPACE EMBEDDINGS WITH SMALL MIXED SMOOTHNESS

SJOERD DIRKSEN AND TINO ULLRICH

Abstract. We consider the problem of determining the asymptotic order of the Gelfand numbers of the mixed-(quasi-)norm embeddings ℓ_p^b(ℓ_q^d) ↪ ℓ_r^b(ℓ_u^d) given that p ≤ r and q ≤ u, with emphasis on cases with p ≤ 1 and/or q ≤ 1. These cases turn out to be related to structured sparsity. We obtain sharp bounds in a number of interesting parameter constellations. Our new matching bounds for the Gelfand numbers of the embeddings of ℓ_1^b(ℓ_2^d) and ℓ_2^b(ℓ_1^d) into ℓ_2^b(ℓ_2^d) imply optimality assertions for the recovery of block-sparse and sparse-in-levels vectors, respectively. In addition, we apply our sharp estimates for the ℓ_p^b(ℓ_q^d)-spaces to obtain new two-sided estimates for the Gelfand numbers of multivariate Besov space embeddings in regimes of small mixed smoothness. It turns out that in some particular cases these estimates show the same asymptotic behavior as in the univariate situation. In the remaining cases they differ at most by a log log factor from the univariate bound.

Key words and phrases. Gelfand numbers, ℓ_p^b(ℓ_q^d)-spaces, compressed sensing, block sparsity, sparsity-in-levels, Besov spaces with small mixed smoothness.

1. Introduction

Gelfand numbers are fundamental concepts in approximation theory and Banach space geometry. From a geometric point of view, the Gelfand numbers of an embedding between two (quasi-)normed spaces X and Y quantify the compactness of the embedding X ↪ Y. The m+1-th Gelfand number is given by

c_{m+1}(X ↪ Y) := inf { sup_{x ∈ M ∩ B_X} ‖x‖_Y : M ⊂ X closed subspace with codim(M) ≤ m }.

From the perspective of approximation theory and signal processing, Gelfand numbers naturally appear when considering the problem of the optimal recovery of an element f ∈ X from m arbitrary linear samples, where the recovery error is measured in Y. For instance, f may be a function in a certain function space and the linear samples may consist of Fourier or wavelet coefficients of f or simple function values. The m-th minimal worst-case recovery error

(1)    E_m(X, Y) := inf sup_{‖f‖_X ≤ 1} ‖f − φ(N(f))‖_Y,

where the infimum is taken over all maps of the form φ ∘ N : X → Y such that N : X → R^m is a linear and continuous measurement map and φ : R^m → Y is an arbitrary (not necessarily linear) recovery map, is under mild assumptions equivalent to the m+1-th Gelfand number. More details on this connection can be found in Proposition 2.2 and the discussion in Remark 2.3 below.

The aim of this paper is to understand the behavior of the Gelfand numbers of embeddings between ℓ_p^b(ℓ_q^d)-spaces, which are equipped with the mixed (quasi-)norm

(2)    ‖x‖_{ℓ_p^b(ℓ_q^d)} := ( Σ_{i=1}^{b} ( Σ_{j=1}^{d} |x_{ij}|^q )^{p/q} )^{1/p}.

We put emphasis on cases where either 0 < p ≤ 1 < q ≤ 2 or both 0 < p, q ≤ 1, which in particular means that we are confronted with parameter constellations where ‖·‖_{ℓ_p^b(ℓ_q^d)} is only a quasi-norm and the corresponding unit ball is non-convex. The ℓ_p^b(ℓ_2^d)- and ℓ_2^b(ℓ_p^d)-balls with 0 < p ≤ 1 appear naturally as models for (approximately) block-sparse and sparse-in-levels signals, respectively, see Subsection 2.2 for further details. The study of Gelfand numbers of the embeddings of ℓ_p^b(ℓ_q^d)-spaces, which we conduct in this paper, yields general performance bounds for the optimal recovery of signals with structured sparsity, see Subsection 1.1. On the other hand, ℓ_p^b(ℓ_q^d)-balls appear naturally in wavelet

characterizations of multivariate Besov spaces with dominating mixed smoothness. The study of embeddings between quasi-normed ℓ_p^b(ℓ_q^d)-spaces in this paper leads to new asymptotic bounds for Gelfand numbers of embeddings between the aforementioned Besov spaces in regimes of small smoothness, see Subsection 1.2. Interestingly, in some particular cases these estimates show the same asymptotic behavior as in the univariate situation. In the remaining cases they differ at most by a log log factor from the univariate bound.

1.1. Gelfand numbers and sparse recovery. We obtain matching bounds for the Gelfand numbers

c_m(id : ℓ_p^b(ℓ_q^d) → ℓ_r^b(ℓ_u^d))

for a number of interesting parameter ranges in this paper. The results in full detail can be found in Section 6. At this point, we wish to discuss only the most illustrative cases and highlight some interesting effects. In particular, we prove for 1 ≤ m ≤ bd and 0

Then, there exist constants c_1, c_2 > 0 depending only on C so that if s > c_2, then

m ≥ c_1 (s log(eb/s) + sd).

On the other hand, if for all x ∈ R^{bd},

(8)    ‖x − ∆(Ax)‖_{ℓ_2^b(ℓ_2^d)} ≤ D σ_t^{inner}(x)_{ℓ_2^b(ℓ_1^d)} / √t,

then there exist constants c_1, c_2 > 0 depending only on D such that

m ≥ c_1 bt log(ed/t),

provided that t > c_2.
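To make the quantities in (8) concrete: the error is measured in the mixed norm ℓ_2^b(ℓ_2^d) from (2), while σ_t^{inner}(x)_{ℓ_2^b(ℓ_1^d)} is the best approximation error of x by t-inner sparse vectors, i.e., vectors with at most t nonzero entries per block (see Subsection 2.2). Since the blocks decouple, the best approximant simply keeps the t largest-magnitude entries of each block. The following is a minimal Python sketch of these two quantities; the function names are ours, not from the paper.

```python
def mixed_norm(x, p, q):
    """Mixed (quasi-)norm ||x||_{l_p^b(l_q^d)} of a b x d array x
    (a list of b blocks of length d): l_q inside each block,
    then l_p across the b block norms, as in (2)."""
    block_norms = [sum(abs(v) ** q for v in row) ** (1.0 / q) for row in x]
    return sum(n ** p for n in block_norms) ** (1.0 / p)

def sigma_inner(x, t, p=2, q=1):
    """Best approximation error of x by t-inner sparse vectors
    (at most t nonzeros per block), measured in l_p^b(l_q^d).
    Blocks decouple, so keep the t largest entries of each block."""
    residual = []
    for row in x:
        kept = sorted(range(len(row)), key=lambda j: -abs(row[j]))[:t]
        residual.append([0.0 if j in kept else row[j] for j in range(len(row))])
    return mixed_norm(residual, p, q)
```

With the defaults p = 2, q = 1, `sigma_inner` returns the quantity on the right-hand side of (8) up to the factor D/√t.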

To discuss the consequences of this result for structured sparse recovery, let us recall the following known results. We refer to Section 2.2 for a more extensive discussion of the literature. Let B be an m × bd Gaussian matrix, i.e., a matrix with i.i.d. standard Gaussian entries, and set A = (1/√m)B. For any 0 < p, q ≤ ∞, let

∆_{ℓ_p(ℓ_q)}(Ax) = argmin_{z ∈ R^{bd} : Az = Ax} ‖z‖_{ℓ_p^b(ℓ_q^d)}

be the reconstruction via ℓ_p(ℓ_q)-minimization. It is known that if the number of measurements m satisfies

m ≳ s log(eb/s) + sd,

then with high probability the pair (A, ∆_{ℓ_1(ℓ_2)}) satisfies the stable reconstruction property in (7) (see [19] and also the discussion in [3, Section 2.1]). Moreover, this number of measurements cannot be reduced: [3, Theorem 2.4] shows that if for a given A ∈ R^{m×bd} one can recover every s-outer sparse vector x exactly from y = Ax via ∆_{ℓ_1(ℓ_2)}, then necessarily m ≳ s log(eb/s) + sd. However, one could still hope to further reduce the required number of measurements for outer sparse recovery by picking a better pair (A, ∆). Corollary 1.2 shows that this is not possible if we want the same stable reconstruction

guarantee as in (7). In this sense, the pair (A, ∆_{ℓ_1(ℓ_2)}) with A = (1/√m)B is optimal.
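The outer sparsity defect underlying the stable reconstruction property in (7) is the best approximation error of x by s-outer sparse vectors, i.e., vectors with at most s nonzero blocks (see Subsection 2.2). Measured in ℓ_1^b(ℓ_2^d), the optimal approximant keeps the s blocks of largest ℓ_2 norm and zeroes out the rest. A short Python sketch under these conventions; the name `sigma_outer` is ours, not from the paper.

```python
def sigma_outer(x, s, p=1, q=2):
    """Best approximation error of the b x d array x by s-outer sparse
    vectors (at most s nonzero blocks), measured in l_p^b(l_q^d).
    The optimum keeps the s blocks with the largest l_q norms."""
    block_norms = [sum(abs(v) ** q for v in row) ** (1.0 / q) for row in x]
    order = sorted(range(len(x)), key=lambda i: -block_norms[i])
    dropped = order[s:]  # blocks zeroed out in the approximant
    return sum(block_norms[i] ** p for i in dropped) ** (1.0 / p)
```

Since blocks are dropped wholesale, an s-outer sparse vector has at most sd nonzero entries, which matches the sd term in the measurement bound m ≳ s log(eb/s) + sd above.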

It is worthwhile to note that the pair (A, ∆_{ℓ_1(ℓ_1)}), A = (1/√m)B, with high probability satisfies the stable reconstruction guarantee ([10, Theorem 1.2 and Section 1.3], see also [11, 14] and [23, Theorem 9.13])

(9)    ‖x − ∆_{ℓ_1(ℓ_1)}(Ax)‖_{ℓ_2^b(ℓ_2^d)} ≤ D σ_k(x)_{ℓ_1^b(ℓ_1^d)} / √k,

whenever m ≳ k log(ebd/k), where

σ_k(x)_{ℓ_1^b(ℓ_1^d)} = inf { ‖x − z‖_{ℓ_1^b(ℓ_1^d)} : z is k-sparse }.

As a consequence (by taking k = sd), (7) is satisfied under the condition m ≳ sd log(eb/s). In conclusion, if we are only interested in stability of our reconstruction method with respect to outer sparsity (instead of sparsity) defects, then we can achieve this with fewer measurements (in a best worst-case scenario) by using

∆_{ℓ_1(ℓ_2)} as a reconstruction map instead of ∆_{ℓ_1(ℓ_1)}. In the case of reconstruction of inner-sparse vectors, one might expect that a similar phenomenon occurs: if we are only interested in stability with respect to inner sparsity (instead of sparsity) defects, then we might be able to reconstruct from fewer measurements by using a reconstruction map that is especially suited for inner sparse recovery (a potential candidate would be

∆_{ℓ_2(ℓ_1)}, in analogy with the outer sparse case). Somewhat surprisingly, Corollary 1.2 shows that this is not the case. By taking k = bt in (9) and using that every t-inner sparse vector is bt-sparse (but not vice versa),

one deduces that the pair (A, ∆_{ℓ_1(ℓ_1)}), A = (1/√m)B, with high probability satisfies (8) if m ≳ bt log(ed/t). This is optimal by Corollary 1.2.

1.2. Besov space embeddings with small mixed smoothness. As a second application of our work, we are interested in proving upper and lower bounds for the Gelfand numbers of compact embeddings between Besov spaces

(10)    Id : S_{p_0,q_0}^{r_0} B([0,1]^d) → S_{p_1,q_1}^{r_1} B([0,1]^d),

where 0 < p_0 ≤ p_1 ≤ ∞, 0 < q_0, q_1 ≤ ∞ and r_0 − r_1 > 1/p_0 − 1/p_1. By using a well-known hyperbolic wavelet discretization machinery, see for instance [63], we can transfer the problem to bounding the Gelfand numbers of

id : s_{p_0,q_0}^{r_0} b([0,1]^d) → s_{p_1,q_1}^{r_1} b([0,1]^d),

where (see (46) below for details)

s_{p,q}^r b := { λ = (λ_{j̄,k̄})_{j̄,k̄} ⊂ C : ‖λ‖_{s_{p,q}^r b} := [ Σ_{j̄ ∈ N_0^d} 2^{(r − 1/p) q ‖j̄‖_1} ( Σ_{k̄ ∈ A_{j̄}} |λ_{j̄,k̄}|^p )^{q/p} ]^{1/q} < ∞ }

and A_{j̄} = ([0, 2^{j_1} − 1] × ··· × [0, 2^{j_d} − 1]) ∩ Z^d. We immediately observe that s_{p,q}^r b has an ℓ_q(ℓ_p)-structure. After a suitable block decomposition we use the subadditivity of Gelfand numbers (see (S2) below) to reduce the problem to finite-dimensional blocks. This strategy has for instance been pursued in the recent paper [46], where the case p_0 = q_0 > 1 has been treated. In our work, we are interested in the case min{p_0, q_0} ≤ 1, max{p_1, q_1} ≤ 2, which is connected to Theorem 6.1 below. As it turns out, there is a strong qualitative difference in the behavior of the Gelfand numbers depending on whether one is in the "large mixed smoothness" (r_0 − r_1 > 1/q_0 − 1/q_1) or "small mixed smoothness" (r_0 − r_1 ≤ 1/q_0 − 1/q_1) regime. In the large smoothness regime, sharp bounds for the Gelfand numbers can be readily derived: using almost literally the arguments in [63, Thms. 3.18, 3.19] together with the well-known sharp estimates for the Gelfand numbers of the (non-mixed) embeddings id : ℓ_u^n → ℓ_v^n, where 0