arXiv:0901.2912v1 [cs.IT] 19 Jan 2009

Weighted ℓ1 Minimization for Sparse Recovery with Prior Information

M. Amin Khajehnejad, Caltech EE, Pasadena CA, USA ([email protected])
Weiyu Xu, Caltech EE, Pasadena CA, USA ([email protected])
A. Salman Avestimehr, Caltech CMI, Pasadena CA, USA ([email protected])
Babak Hassibi, Caltech EE, Pasadena CA, USA ([email protected])

Abstract: In this paper we study the compressed sensing problem of recovering a sparse signal from a system of underdetermined linear equations when we have prior information about the probability of each entry of the unknown signal being nonzero. In particular, we focus on a model where the entries of the unknown vector fall into two sets, each with a different probability of being nonzero. We propose a weighted ℓ1 minimization recovery algorithm and analyze its performance using a Grassmann angle approach. We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being nonzero) so that an iid Gaussian measurement matrix along with weighted ℓ1 minimization recovers almost all such sparse signals with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We also provide simulations to demonstrate the advantages of the method over conventional ℓ1 optimization.
I. INTRODUCTION

Compressed sensing is an emerging technique of joint sampling and compression that has recently been proposed as an alternative to Nyquist sampling (followed by compression) for scenarios where measurements can be costly [14]. The whole premise is that sparse signals (signals with many zero or negligible elements in a known basis) can be recovered with far fewer measurements than the ambient dimension of the signal itself. In fact, the major breakthrough in this area has been the demonstration that ℓ1 minimization can efficiently recover a sufficiently sparse vector from a system of underdetermined linear equations [2].

The conventional approach to compressed sensing assumes no prior information on the unknown signal other than the fact that it is sufficiently sparse in a particular basis. In many applications, however, additional prior information is available. In fact, in many cases the signal recovery problem (which compressed sensing attempts to address) is a detection or estimation problem in some larger system. Some recent work along these lines can be found in [5] (which considers compressed detection and estimation) and [6] (on Bayesian compressed sensing). In other cases, compressed sensing may be the inner loop of a larger estimation problem that feeds prior information on the sparse signal (e.g., its sparsity pattern) to the compressed sensing algorithm. The conventional compressed sensing model is therefore a special case of a model that assigns to each entry of the unknown vector a probability of being zero or nonzero.

In this paper we will consider such a model for the sparse signal. As mentioned above, there are many situations where such prior information may be available, such as in natural images, medical imaging, or DNA microarrays, where the signal is often block sparse, i.e., more likely to be nonzero in certain blocks than in others [7]. While it is possible (albeit cumbersome) to study this model in full generality, in this paper we will focus on the case where the entries of the unknown signal fall into two categories: in the first set (with cardinality n_1) the probability of being nonzero is P_1, and in the second set (with cardinality n_2 = n - n_1) this probability is P_2. (Clearly, in this case the sparsity will with high probability be around n_1 P_1 + n_2 P_2.) This model is rich enough to capture many of the salient features regarding prior information, while being simple enough to allow a very thorough analysis. While it is in principle possible to extend our techniques and analysis to models with more than two categories of entries, the analysis becomes increasingly tedious and so is beyond the scope of this short paper.

The contributions of the paper are the following. We propose a weighted ℓ1 minimization approach for sparse recovery, in which the ℓ1 norms of the entries of each set are given different weights W_i (i = 1, 2). Clearly, one would want to give a larger weight to those entries whose probability of being nonzero is smaller (thus further forcing them to be zero). The second contribution is to compute explicitly the relationship between P_1, P_2, n_1, n_2, the weights W_1, W_2 and the number of measurements so that the unknown signal can be recovered with overwhelming probability as n → ∞ (the so-called weak threshold) for measurement matrices drawn from an iid Gaussian ensemble. The analysis uses the high-dimensional geometry techniques (e.g., Grassmann angles) first introduced by Donoho and Tanner [1], [3] to obtain sharp thresholds for compressed sensing. However, rather than the neighborliness condition used in [1], [3], we find it more convenient to use the null space characterization of Xu and Hassibi [4], [13]. The resulting Grassmannian manifold approach is a general framework for incorporating additional factors into compressed sensing: in [4] it was used to incorporate measurement noise; here it is used to incorporate prior information and weighted ℓ1 optimization. Our analytic results allow us to compute the optimal weights for any P_1, P_2, n_1, n_2. We also provide simulation results to show the advantages of the weighted method over standard ℓ1 minimization.

A somewhat related method that uses weighted ℓ1 optimization is the re-weighted ℓ1 minimization of Candes et al. [8]. The main difference is that there is no prior information, and at each step the ℓ1 optimization is re-weighted using the estimates of the signal obtained in the last minimization step.

II. MODEL

The signal is represented by an n × 1 vector x = (x_1, x_2, ..., x_n)^T of real valued numbers, and is non-uniformly sparse with sparsity factor P_1 over the (index) set K_1 ⊂ {1, 2, ..., n} and sparsity factor P_2 over the set K_2 = {1, 2, ..., n} \ K_1. By this we mean that if i ∈ K_1, then x_i is a nonzero element with probability P_1 and zero with probability 1 - P_1; if i ∈ K_2, the probability of x_i being nonzero is P_2. We assume that |K_1| = n_1 and |K_2| = n_2 = n - n_1. The measurement matrix A is an m × n (m/n = δ < 1) matrix with i.i.d. N(0, 1) entries. The observation vector is denoted by y and obeys

    y = Ax.    (1)

As mentioned in Section I, ℓ1-minimization can recover a vector x with k = µn non-zeros, provided µ is less than a known function of δ. ℓ1 minimization has the following form:

    \min_{Ax=y} \|x\|_1    (2)

(2) is a linear program and can be solved polynomially fast (O(n^3)). However, it fails to encapsulate additional prior information about the nature of the signal, should any such information be available. One might simply think of modifying (2) to a weighted ℓ1 minimization as follows:

    \min_{Ax=y} \|x\|_{w,1} = \min_{Ax=y} \sum_{i=1}^{n} w_i |x_i|    (3)

The index w indicates the n × 1 positive weight vector. Now the questions are: what is the optimal set of weights, and can one improve the recovery threshold using the weighted ℓ1 minimization of (3) with those weights rather than (2)? We have to be more clear about the objective at this point and what we mean by extending the recovery threshold. First of all, note that the vectors generated based on the model described above can have any arbitrary number of nonzeros. However, their support size is typically (with probability arbitrarily close to one) around n_1 P_1 + n_2 P_2. Therefore, there is no notion of strong threshold as in the case of [1]. We are asking for what P_1 and P_2 signals generated based on this model can be recovered with overwhelming probability as n → ∞. Moreover, we are wondering whether, by adjusting the w_i's according to P_1 and P_2, one can extend the typical sparsity to dimension ratio ((n_1 P_1 + n_2 P_2)/n) for which reconstruction is successful with high probability. This is the topic of the next section.

III. COMPUTATION OF THE WEAK THRESHOLD

Because of the partial symmetry of the sparsity of the signal, we know that the optimum weights should take only two positive values W_1 and W_2 (we may assume WLOG that W_1 = 1). In other words,

    ∀i ∈ {1, 2, ..., n}:  w_i = W_1 if i ∈ K_1,  w_i = W_2 if i ∈ K_2.

Let x be a random sparse signal generated based on the non-uniformly sparse model of Section II and be supported on the set K. K is called ε-typical if ||K ∩ K_1| - n_1 P_1| ≤ εn and ||K ∩ K_2| - n_2 P_2| ≤ εn. Let E be the event that x is recovered by (3). Then:

    P[E^c] = P[E^c | K is ε-typical] P[K is ε-typical] + P[E^c | K not ε-typical] P[K not ε-typical]

For any fixed ε > 0, P[K not ε-typical] approaches zero exponentially as n grows, by the law of large numbers. So, to bound the probability of failed recovery, we may assume that K is ε-typical for any small enough ε. Therefore we just consider the case |K| = k = n_1 P_1 + n_2 P_2.

Similar to the null-space condition of [13], we present a necessary and sufficient condition for x to be the solution to (3). It is as follows:

    ∀Z ∈ N(A):  \sum_{i ∈ K} w_i |Z_i| ≤ \sum_{i ∉ K} w_i |Z_i|

where N(A) denotes the right nullspace of A. We can upper bound P(E^c) with P_{K,-}, which is the probability that a vector x of a specific sign pattern (say non-positive) and supported on the specific set K is not recovered correctly by (3). (A difference between this upper bound and the one in [4] is that here there is no \binom{n}{k} 2^k factor, because we have fixed the support set K and the sign pattern of x.) Exactly as done in [4], by restricting x to the cross-polytope {x ∈ R^n | \|x\|_{w,1} = 1} (this is possible because the restricted polytope totally surrounds the origin in R^n), and noting that x is on a (k-1)-dimensional face F of the skewed cross-polytope SP = {y ∈ R^n | \|y\|_{w,1} ≤ 1}, P_{K,-} is essentially the probability that a uniformly chosen (n-m)-dimensional subspace Ψ shifted by the point x, namely (Ψ + x), intersects SP nontrivially at some point besides x. P_{K,-} is then interpreted as the complementary Grassmann angle [9] for the face F with respect to the polytope SP under the Grassmann manifold Gr_{(n-m)}(n). Building on the works of L. A. Santaló [11] and P. McMullen [12] in high dimensional integral geometry and convex polytopes, the complementary Grassmann angle for the (k-1)-dimensional face F can be explicitly expressed as a sum of products of internal angles and external angles [10]:

    2 × \sum_{s ≥ 0} \sum_{G ∈ ℑ_{m+1+2s}(SP)} β(F, G) γ(G, SP),    (4)

where s is any nonnegative integer, G is any (m+1+2s)-dimensional face of the skewed cross-polytope (ℑ_{m+1+2s}(SP) is the set of all such faces), β(·, ·) stands for the internal angle and γ(·, ·) stands for the external angle. These are basically defined as follows [10][12]:

• An internal angle β(F_1, F_2) is the fraction of the hypersphere S covered by the cone obtained by observing the face F_2 from the face F_1. It is defined to be zero when F_1 ⊄ F_2 and one if F_1 = F_2.

• An external angle γ(F_3, F_4) is the fraction of the hypersphere S covered by the cone of outward normals to the hyperplanes supporting the face F_4 at the face F_3. It is defined to be zero when F_3 ⊄ F_4 and one if F_3 = F_4.

Note that F here is a typical face of SP corresponding to a typical set K. β(F, G) depends not only on the dimension of the face G, but also on the number of its vertices supported on K_1 and K_2. In other words, if G is supported on a set L, then β(F, G) is only a function of |L ∩ K_1| and |L ∩ K_2|. So we write β(F, G) = β(t_1, t_2) and similarly γ(G, SP) = γ(t_1, t_2), where t_1 = |L ∩ K_1| - n_1 P_1 and t_2 = |L ∩ K_2| - n_2 P_2. Combining the notations and counting the number of faces G, (4) leads to:

    P(E^c) ≤ \sum 2^{t_1+t_2} \binom{(1-P_1)n_1}{t_1} \binom{(1-P_2)n_2}{t_2} β(t_1, t_2) γ(t_1, t_2) + O(e^{-cn})    (5)

for some c > 0, where the sum ranges over 0 ≤ t_1 ≤ (1-P_1)n_1, 0 ≤ t_2 ≤ (1-P_2)n_2, t_1 + t_2 > m - k + 1. As n → ∞, each term in (5) behaves like exp{nψ_{com}(t_1,t_2) - nψ_{int}(t_1,t_2) - nψ_{ext}(t_1,t_2)}, where ψ_{com}, ψ_{int} and ψ_{ext} are the combinatorial exponent, the internal angle exponent and the external angle exponent of each term, respectively. It can be shown that the necessary and sufficient condition for (5) to tend to zero is that ψ(t_1,t_2) = ψ_{com}(t_1,t_2) - ψ_{int}(t_1,t_2) - ψ_{ext}(t_1,t_2) be uniformly negative for all t_1 and t_2 in (5).

In the following sub-sections we evaluate the internal and external angles for a typical face F and a face G containing F, and give closed form upper bounds for them. We combine the terms together, compute the exponents using the Laplace method in Section IV, and derive thresholds for the negativity of the cumulative exponent.

A. Derivation of the Internal Angles

Suppose that F is a typical (k-1)-dimensional face of the skewed cross-polytope

    SP = {y ∈ R^n | \|y\|_{w,1} = \sum_{i=1}^{n} w_i |y_i| ≤ 1}

supported on the subset K with |K| = k ≈ n_1 P_1 + n_2 P_2. Let G be an (l-1)-dimensional face of SP supported on the set L with F ⊂ G, and, as before, let t_1 = |L ∩ K_1| - n_1 P_1 and t_2 = |L ∩ K_2| - n_2 P_2. First we can prove the following lemma:

Lemma 1: Let Con_{F⊥,G} be the positive cone of all the vectors x ∈ R^n that take the form

    -\sum_{i=1}^{k} b_i e_i + \sum_{i=k+1}^{l} b_i e_i,    (6)

where the b_i, 1 ≤ i ≤ l, are nonnegative real numbers satisfying

    \sum_{i=1}^{k} w_i b_i = \sum_{i=k+1}^{l} w_i b_i  and  b_1/w_1 = b_2/w_2 = ··· = b_k/w_k.

Then

    \int_{Con_{F⊥,G}} e^{-\|x\|^2} dx = β(F, G) V_{l-k-1}(S^{l-k-1}) \int_0^∞ e^{-r^2} r^{l-k-1} dr = β(F, G) · π^{(l-k)/2},    (7)

where V_{l-k-1}(S^{l-k-1}) is the spherical volume of the (l-k-1)-dimensional sphere S^{l-k-1}.

Proof: Omitted for brevity.

From (7) we can find the expression for the internal angle. Define U ⊆ R^{l-k+1} as the set of all nonnegative vectors (x_1, x_2, ···, x_{l-k+1}) satisfying

    x_p ≥ 0, 1 ≤ p ≤ l-k+1,    (\sum_{p=1}^{k} w_p^2) x_1 = \sum_{p=k+1}^{l} w_p x_{p-k+1},

and define f(x_1, ···, x_{l-k+1}) : U → Con_{F⊥,G} to be the linear and bijective map

    f(x_1, ···, x_{l-k+1}) = -\sum_{p=1}^{k} x_1 w_p e_p + \sum_{p=k+1}^{l} x_{p-k+1} w_p e_p.

Then

    \int_{Con_{F⊥,G}} e^{-\|x'\|^2} dx' = \int_U e^{-\|f(x)\|^2} df(x)
    = |J(A)| \int_Γ e^{-\|f(x)\|^2} dx_2 ··· dx_{l-k+1}
    = |J(A)| \int_Γ e^{-(\sum_{p=1}^{k} w_p^2) x_1^2 - \sum_{p=k+1}^{l} w_p^2 x_{p-k+1}^2} dx_2 ··· dx_{l-k+1}    (8)

where Γ is the region described by

    (\sum_{p=1}^{k} w_p^2) x_1 = \sum_{p=k+1}^{l} w_p x_{p-k+1},    x_p ≥ 0, 2 ≤ p ≤ l-k+1,    (9)

and |J(A)| is due to the change of integral variables and is essentially the determinant of the Jacobian of the variable transform given by the l × (l-k) matrix A with

    A_{i,j} = -w_i w_{k+j}/Ω  for 1 ≤ i ≤ k, 1 ≤ j ≤ l-k;
    A_{i,j} = w_i  for k+1 ≤ i ≤ l, j = i-k;
    A_{i,j} = 0  otherwise,    (10)

Fig. 1: Recoverable P1 threshold as a function of W2. P2 = 0.1, m = 0.75n

Fig. 3: Successful recovery percentage for different weights. P2 = 0.1 and m = 0.75n

where Ω = \sum_{p=1}^{k} w_p^2. Now |J(A)| = \sqrt{\det(A^T A)}. By finding the eigenvalues of A^T A we obtain:

    |J(A)| = \sqrt{(Ω + t_1 W_1^2 + t_2 W_2^2)/Ω} · W_1^{t_1} W_2^{t_2}    (11)

Now we define a random variable

    Z = (\sum_{p=1}^{k} w_p^2) X_1 - \sum_{p=k+1}^{l} w_p X_{p-k+1},

where X_1, X_2, ···, X_{l-k+1} are independent random variables, with X_p ∼ HN(0, 1/(2w_{p+k-1}^2)), 2 ≤ p ≤ l-k+1, half-normal distributed random variables, and X_1 ∼ N(0, 1/(2\sum_{p=1}^{k} w_p^2)) a normal distributed random variable. Then, by inspection, (8) is equal to C p_Z(0), where p_Z(·) is the probability density function of the random variable Z evaluated at the point Z = 0, and

    C = \frac{(\sqrt{π})^{l-k+1}}{2^{l-k} \prod_{q=k+1}^{l} w_q} \sqrt{\sum_{p=1}^{l} w_p^2}
      = \frac{(\sqrt{π})^{l-k+1}}{2^{l-k} W_1^{t_1} W_2^{t_2}} \sqrt{(n_1 P_1 + t_1) W_1^2 + (n_2 P_2 + t_2) W_2^2}    (12)

Combining (7) and (8):

    β(t_1, t_2) = π^{(k-l)/2} C p_Z(0)    (13)

B. Derivation of the External Angle

Without loss of generality, assume K = {n-k+1, ···, n}. Consider the (l-1)-dimensional face

    G = conv{ e_{n-l+1}/w_{n-l+1}, ..., e_{n-k}/w_{n-k}, e_{n-k+1}/w_{n-k+1}, ..., e_n/w_n }

of the skewed cross-polytope SP. The 2^{n-l} outward normal vectors of the supporting hyperplanes of the facets containing G are given by

    { \sum_{i=1}^{n-l} j_i w_i e_i + \sum_{i=n-l+1}^{n} w_i e_i,  j_i ∈ {-1, 1} }.

Then the outward normal cone c(G, SP) at the face G is the positive hull of these normal vectors. Thus

    \int_{c(G,SP)} e^{-\|x\|^2} dx = γ(G, SP) V_{n-l}(S^{n-l}) \int_0^∞ e^{-r^2} r^{n-l} dr = γ(G, SP) · π^{(n-l+1)/2},    (14)

where V_{n-l}(S^{n-l}) is the spherical volume of the (n-l)-dimensional sphere S^{n-l}. Now define U to be the set

    { x ∈ R^{n-l+1} | x_{n-l+1} ≥ 0, |x_i/w_i| ≤ x_{n-l+1}, 1 ≤ i ≤ n-l }

and define f(x_1, ···, x_{n-l+1}) : U → c(G, SP) to be the linear and bijective map

    f(x_1, ···, x_{n-l+1}) = \sum_{i=1}^{n-l} x_i e_i + \sum_{i=n-l+1}^{n} w_i x_{n-l+1} e_i.

Then

    \int_{c(G,SP)} e^{-\|x'\|^2} dx' = |J(A)| \int_U e^{-\|f(x)\|^2} dx
    = |J(A)| \int_0^∞ \int_{-w_1 x_{n-l+1}}^{w_1 x_{n-l+1}} ··· \int_{-w_{n-l} x_{n-l+1}}^{w_{n-l} x_{n-l+1}} e^{-x_1^2 - ··· - x_{n-l}^2 - (\sum_{i=n-l+1}^{n} w_i^2) x_{n-l+1}^2} dx_1 ··· dx_{n-l+1}
    = |J(A)| \int_0^∞ e^{-(\sum_{i=n-l+1}^{n} w_i^2) x^2} ( \int_{-W_1 x}^{W_1 x} e^{-y^2} dy )^{(1-P_1)n_1 - t_1} ( \int_{-W_2 x}^{W_2 x} e^{-y^2} dy )^{(1-P_2)n_2 - t_2} dx
    = 2^{n-l} \int_0^∞ e^{-x^2} ( \int_0^{W_1 x/\sqrt{ξ}} e^{-y^2} dy )^{r_1} ( \int_0^{W_2 x/\sqrt{ξ}} e^{-y^2} dy )^{r_2} dx,    (15)

where ξ = ξ(t_1, t_2) = \sum_{i=n-l+1}^{n} w_i^2, r_1 = (1-P_1)n_1 - t_1, r_2 = (1-P_2)n_2 - t_2, and |J(A)| = \sqrt{\det(A^T A)} = \sqrt{ξ} results from the change of variable in the integral.

IV. EXPONENT CALCULATION

Using the Laplace method we compute the angle exponents. They are given in the following theorems, the proofs of which are omitted for brevity. We assume n_1 = γ_1 n, n_2 = γ_2 n and, WLOG, W_1 = 1, W_2 = W.
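The last line of (15) is a one-dimensional integral and is straightforward to evaluate numerically. A sketch assuming SciPy (the parameter values fed in at the end are illustrative and not tied to a particular face G):

```python
import numpy as np
from math import erf, exp, pi, sqrt
from scipy.integrate import quad

def external_angle(W1, W2, r1, r2, xi):
    """Evaluate gamma(G, SP) from the last line of (15):
    gamma * pi^{(n-l+1)/2} = 2^{n-l} * int_0^inf e^{-x^2}
        * (int_0^{W1 x/sqrt(xi)} e^{-y^2} dy)^{r1}
        * (int_0^{W2 x/sqrt(xi)} e^{-y^2} dy)^{r2} dx,
    where n - l = r1 + r2."""
    nl = r1 + r2                                   # n - l
    g = lambda z: 0.5 * sqrt(pi) * erf(z)          # int_0^z e^{-y^2} dy
    f = lambda x: exp(-x * x) * g(W1 * x / sqrt(xi)) ** r1 \
                              * g(W2 * x / sqrt(xi)) ** r2
    val, _ = quad(f, 0, np.inf)
    return 2 ** nl * val / pi ** ((nl + 1) / 2)

# sanity check: for a facet (l = n, so r1 = r2 = 0) the external angle
# of any polytope is exactly 1/2
print(external_angle(1.0, 1.0, 0, 0, xi=1.0))
```

The facet case reduces to π^{-1/2} ∫_0^∞ e^{-x²} dx = 1/2, which gives a cheap correctness check on the implementation.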

Fig. 2: Successful recovery percentage for weighted ℓ1 minimization with different weights and optimal weights in a nonuniform sparse setting. P2 = 0.05 and m = 0.5n. (Panel (a): fixed weights w2 ∈ {1, 1.5, 2, 2.5, 3}; panel (b): w2 = 1 versus the optimal weight w2 = w*.)
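Experiments of this kind are easy to reproduce at small scale. A sketch in the spirit of Fig. 2, assuming SciPy (the dimensions are shrunk from the paper's n = 200 so that it runs in seconds; the weighted_l1 helper is the standard LP lift of (3)):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    """min w^T t  s.t.  -t <= x <= t,  A x = y  (LP lift of weighted l1)."""
    m, n = A.shape
    I = np.eye(n)
    res = linprog(np.concatenate([np.zeros(n), w]),
                  A_ub=np.block([[I, -I], [-I, -I]]), b_ub=np.zeros(2 * n),
                  A_eq=np.hstack([A, np.zeros((m, n))]), b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

def success_rate(w2, P1, P2=0.05, n=40, m=20, trials=20, seed=1):
    """Empirical recovery rate over random instances of the two-set model,
    with n1 = n2 = n/2 as in the text (w2 = 1 is plain l1)."""
    rng = np.random.default_rng(seed)
    n1 = n // 2
    w = np.concatenate([np.ones(n1), w2 * np.ones(n - n1)])
    hits = 0
    for _ in range(trials):
        x = np.zeros(n)
        x[:n1] = (rng.random(n1) < P1) * rng.standard_normal(n1)
        x[n1:] = (rng.random(n - n1) < P2) * rng.standard_normal(n - n1)
        A = rng.standard_normal((m, n))
        hits += np.max(np.abs(weighted_l1(A, A @ x, w) - x)) < 1e-4
    return hits / trials

# plain l1 (w2 = 1) versus larger weights on the sparser set
for w2 in (1.0, 2.0, 3.0):
    print(w2, success_rate(w2, P1=0.3))
```

At these toy dimensions the numbers are noisy, but sweeping w2 over a grid of values per P1, as in Fig. 2a, shows the same qualitative effect.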

Theorem 1: Let t_1 = t_1′ n, t_2 = t_2′ n, g(x) = \frac{2}{\sqrt{π}} e^{-x^2}, and G(x) = \frac{2}{\sqrt{π}} \int_0^x e^{-y^2} dy. Also define C = (t_1′ + γ_1 P_1) + W^2 (t_2′ + γ_2 P_2), D_1 = γ_1 (1 - P_1) - t_1′ and D_2 = γ_2 (1 - P_2) - t_2′. Let x_0 be the unique solution in x of the following:

    2C - \frac{g(x) D_1}{x G(x)} - \frac{W g(Wx) D_2}{x G(Wx)} = 0

Then

    ψ_{ext}(t_1, t_2) = C x_0^2 - D_1 \log G(x_0) - D_2 \log G(W x_0)    (16)

Theorem 2: Let b = \frac{t_1 + W^2 t_2}{t_1 + t_2}, and let ϕ(·) and Φ(·) be the standard Gaussian pdf and cdf functions respectively. Also let Q(s) = \frac{t_1 ϕ(s)}{(t_1 + t_2) Φ(s)} + \frac{W t_2 ϕ(Ws)}{(t_1 + t_2) Φ(Ws)}. Define the function \hat{M}(s) = -s/Q(s) and solve for s in \hat{M}(s) = \frac{m}{mb + Ω}. Let the unique solution be s^* and set y = s^* (b - \frac{1}{\hat{M}(s^*)}). Compute the rate function Λ^*(y) = s y - \frac{t_1}{t_1 + t_2} Λ_1(s) - \frac{t_2}{t_1 + t_2} Λ_1(Ws) at the point s = s^*, where Λ_1(s) = \frac{s^2}{2} + \log(2Φ(s)). The internal angle exponent is then given by:

    ψ_{int}(t_1, t_2) = (Λ^*(y) + \frac{m}{2Ω} y + \log 2)(t_1′ + t_2′)    (17)

As an illustration of these results, for P_2 = 0.1 and δ = m/n = 0.75, using Theorems 1 and 2 and combining the angle exponents with the combinatorial exponent, we have calculated the threshold for P_1 for different values of w_2 in the range [1, 3], below which the signal can be recovered. The curve is depicted in Figure 1. As expected, the curve suggests that in this setting weighted ℓ1 minimization boosts the weak threshold in comparison with ℓ1 minimization. This is verified in the next section by some examples.

V. SIMULATION

We demonstrate by some examples that appropriate weights can boost the recovery percentage. We fix P_2 and n = 2m = 200, and try ℓ1 and weighted ℓ1 minimization for various values of P_1. We choose n_1 = n_2 = n/2. Figure 2a shows one such comparison for P_2 = 0.05 and different values of w_2. Note that the optimal value of w_2 varies as P_1 changes. Figure 2b illustrates how the optimal weighted ℓ1 minimization surpasses the ordinary ℓ1 minimization. The optimal curve is basically achieved by selecting the best weight of Figure 2a for each single value of P_1. Figure 3 shows the result of simulations in another setting where P_2 = 0.1 and m = 0.75n (similar to the setting of the previous section). It is clear from the figure that the recovery success threshold for P_1 is shifted higher when using weighted ℓ1 minimization rather than standard ℓ1 minimization. Note that this result matches the theoretical result of Figure 1 very well.

REFERENCES

[1] David Donoho, "High-Dimensional Centrally Symmetric Polytopes with Neighborliness Proportional to Dimension", Discrete and Computational Geometry, 35(4), pp. 617-652, 2006.
[2] Emmanuel Candes and Terence Tao, "Decoding by linear programming", IEEE Trans. on Information Theory, 51(12), pp. 4203-4215, December 2005.
[3] David Donoho and Jared Tanner, "Neighborliness of randomly-projected simplices in high dimensions", Proc. National Academy of Sciences, 102(27), pp. 9452-9457, 2005.
[4] W. Xu and B. Hassibi, "Compressed Sensing Over the Grassmann Manifold: A Unified Analytical Framework", Forty-Sixth Annual Allerton Conference, September 23-26, 2008.
[5] Mark Davenport, Michael Wakin, and Richard Baraniuk, "Detection and estimation with compressive measurements", Rice ECE Department Technical Report TREE 0610, November 2006.
[6] Shihao Ji, Ya Xue, and Lawrence Carin, "Bayesian compressive sensing", IEEE Trans. on Signal Processing, 56(6), pp. 2346-2356, June 2008.
[7] M. Stojnic, F. Parvaresh and B. Hassibi, "On the reconstruction of block-sparse signals with an optimal number of measurements", Preprint, 2008.
[8] E. J. Candes, M. Wakin and S. Boyd, "Enhancing sparsity by reweighted l1 minimization", J. Fourier Anal. Appl., 14, pp. 877-905, 2008.
[9] Branko Grünbaum, "Grassmann angles of convex polytopes", Acta Math., 121, pp. 293-302, 1968.
[10] Branko Grünbaum, Convex Polytopes, volume 221 of Graduate Texts in Mathematics, Springer-Verlag, New York, second edition, 2003. Prepared and with a preface by Volker Kaibel, Victor Klee and Günter M. Ziegler.
[11] L. A. Santaló, "Geometría integral en espacios de curvatura constante", Rep. Argentina Publ. Com. Nac. Energía Atómica, Ser. Mat. 1, No. 1, 1952.
[12] Peter McMullen, "Non-linear angle-sum relations for polyhedral cones and polytopes", Math. Proc. Cambridge Philos. Soc., 78(2), pp. 247-261, 1975.
[13] M. Stojnic, W. Xu, and B. Hassibi, "Compressed sensing - probabilistic analysis of a null-space characterization", IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3377-3380, March 31-April 4, 2008.
[14] http://www.dsp.ece.rice.edu/cs/