COMPRESSED SENSING WITH CORRUPTED PARTICIPANTS

Meng Wang⋆, Weiyu Xu†, Robert Calderbank♯

⋆ Dept. of ECSE, Rensselaer Polytechnic Institute, Troy, NY. † Dept. of ECE, University of Iowa, Iowa City, IA. ♯ Dept. of Electrical Engineering, Duke University, Durham, NC.

ABSTRACT

Compressed sensing (CS) theory promises that one can recover a real-valued sparse signal from a small number of linear measurements. Motivated by network monitoring with link failures, we for the first time consider the problem of recovering signals that contain both real-valued entries and corruptions, where the real entries represent transmission delays on normal links and the corruptions represent failed links. Unlike conventional CS, here a measurement is real-valued only if it does not include a failed link, and it is corrupted otherwise. We prove that $O((d+1)\max(d,k)\log n)$ nonadaptive measurements are enough to recover all $n$-dimensional signals that contain $k$ nonzero real entries and $d$ corruptions. We provide explicit constructions of measurements and recovery algorithms. We also analyze the performance of signal recovery when the measurements contain errors.

Index Terms— compressed sensing, group testing, fundamental limits, network tomography, corruptions.

1. INTRODUCTION

Compressed sensing (CS) [1–4] indicates that if an $n$-dimensional signal is $k$-sparse, i.e., it only has $k$ nonzero entries, then one can efficiently recover the signal from $O(k\log(n/k))$ nonadaptive linear measurements. Network tomography [5–10] attempts to infer internal system characteristics (e.g., link queueing delays) of a network from indirect end-to-end (aggregate) measurements (e.g., path delay measurements). Since only a small number of bottleneck links experience large delays, some recent papers like [11–14] have considered the application of CS in network tomography, where the goal is to recover real-valued sparse link delays from a small number of path delay measurements.

In communication networks, a link between two routers may fail either temporarily or permanently. If a link fails, all the packets that travel through it will be lost. Link failure localization has been extensively investigated, e.g., [15–18], where one attempts to locate the failed links from boolean path measurements. A path measurement is a "success" if it does not pass any failed links. Otherwise, it is treated as a "failure". This is a group testing (GT) problem [19]; see [20–23] for some recent examples of a rich literature.

We propose to locate the failed links and recover the transmission delays on normal links simultaneously from a set of nonadaptive path measurements. A path measurement is a "failure" if it includes at least one failed link, since its packets will be lost. Otherwise, we obtain the real-valued path delay, which is the sum of the link delays of the links it passes through. We assume that the number of failed links and the number of nonzero link delays are both small. As far as we know, recovering sparse signals that contain failures is a new problem and has not been systematically addressed before.

We for the first time consider the problem of recovering sparse signals that contain corruptions and formulate it as a combined CS and GT problem (Section 2). We provide bounds on the number of measurements needed to recover such signals (Theorem 2) and compare them with CS and GT (Table 1). We provide an explicit measurement construction method as well as efficient recovery algorithms (Section 3). When the measurements are erroneous, the number of measurements needed is also characterized (Section 4).

2. PROBLEM FORMULATION

Let $x \in \bar{\mathbb{R}}^n$ ($\bar{\mathbb{R}} = \mathbb{R} \cup \{\infty\}$) denote the unknown signal to recover, where $\infty$ indicates a corruption. Let $y = Mx$ denote the obtained measurements, where $M \in \mathbb{R}^{m \times n}$ is the measurement matrix. If $x_j = \infty$, then $y_i = \infty$ for all $i$ such that $M_{ij} \neq 0$. The set $\mathcal{F}(y) := \{i \in [[m]] : y_i = \infty\}$ denotes the corrupted measurements. Note that in conventional CS, $x$ and $y$ are real vectors. $[[q]]$ ($q \in \mathbb{N}$) represents the set $\{1, \dots, q\}$. For a set $S \subseteq [[q]]$, $|S|$ denotes its cardinality, and $S^c$ denotes its complementary set in $[[q]]$. Given $S \subseteq [[n]]$ and $M$, let $\mathcal{N}(S) := \{i \in [[m]] : \exists j \in S \text{ s.t. } M_{ij} \neq 0\}$ denote the set of indices of measurements that pass through at least one entry in $S$, and let $\mathcal{N}^c(S)$ denote its complementary set. For sets $A$ and $B$, $A \cup B$ denotes the union, and $A \setminus B$ contains the elements that are in $A$ but not in $B$. Given a matrix $M$, $M_{AB}$ denotes the submatrix with row indices in $A$ and column indices in $B$.

Definition 1. $x \in \bar{\mathbb{R}}^n$ is $d$-corrupted $k$-sparse (simplified as $(d,k)$-type) if $|S| \le d$ and $|T| \le k$, where $S = \{j \mid x_j = \infty\}$ and $T = \{j \mid 0 < |x_j| < \infty\}$.

Definition 2. A matrix $M \in \mathbb{R}^{m \times n}$ is called $(d,k)$-type identifiable if and only if for every two $(d,k)$-type vectors $x$ and $z$ such that $x \neq z$, it holds that $Mx \neq Mz$.

A $(d,k)$-type vector indicates that there are at most $d$ failed links and at most $k$ links with nonzero transmission delays. Throughout the paper, we consider the "for all" performance that requires $M$ to identify all $(d,k)$-type vectors.

For example, consider the matrix
$$M = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 \end{bmatrix}.$$
One can check that $M$ can identify all 2-sparse signals in $\mathbb{R}^5$ when there is no corruption, i.e., $M$ is $(0,2)$-type identifiable. However, when there exists one corruption, e.g., $x = [0, 0.5, \infty, 0, 0]^T$, we have $y = Mx = [0.5, \infty, \infty, 0]^T$. Although from $y$ and $M$ we can infer that $x_3 = \infty$ and locate the corruption, we cannot decide whether $x_1 = 0.5$ or $x_2 = 0.5$. Thus, $M$ is not $(1,1)$-type identifiable.
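To make the measurement model concrete, the following Python sketch (an illustration added here, not part of the original formulation; the helper name `measure` is ours) evaluates $y = Mx$ over $\bar{\mathbb{R}}$: a measurement equals $\infty$ whenever it touches a corrupted entry and is the usual real-valued sum otherwise. It reproduces the example above.

```python
import numpy as np

INF = np.inf  # represents a corruption

def measure(M, x):
    """Compute y = Mx over R-bar: y_i = inf whenever row i touches a corrupted entry."""
    M = np.asarray(M, dtype=float)
    x = np.asarray(x, dtype=float)
    y = np.empty(M.shape[0])
    for i, row in enumerate(M):
        if np.any((row != 0) & np.isinf(x)):
            y[i] = INF                        # measurement passes through a failed link
        else:
            finite = np.isfinite(x)
            y[i] = row[finite] @ x[finite]    # ordinary real-valued path delay
    return y

M = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
x = np.array([0, 0.5, INF, 0, 0])
print(measure(M, x))   # [0.5 inf inf 0.], matching the example in the text
```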

When $d = 0$, the problem reduces to the CS problem, where one aims to recover $k$-sparse signals. When $k = 0$, it reduces to the GT problem, where one wants to locate $d$ failures from boolean measurements. Here we need to not only locate the $d$ corruptions but also recover the uncorrupted values, among which at most $k$ entries are nonzero. Table 1 compares our result on the number of measurements needed with existing results in GT and CS. The results in GT and CS can be viewed as special cases of our generalized result.

Table 1. Number of nonadaptive measurements.
  $d$ corruptions:            $O(d^2 \log n)$   [21, 23]
  $k$-sparse real signals:    $O(k \log n)$   [1, 2, 4]
  $(d,k)$-type signals:       $O((d+1)\max(d,k)\log n)$   (here)

We remark that the Choir code in [24] is $(1,k)$-identifiable for some $k$. But the construction is not directly extendable to general $d$ and does not attempt to reduce $m$ ($m = n$ in [24]). Here, for any given $n$, $d$, and $k$, we want to design a $(d,k)$-type identifiable $M$ with $m$ as small as possible.

We first introduce disjunct matrices in GT and expanders in CS that will be useful for our analysis.

Definition 3 (Disjunct matrices). $M$ is called $(d,e)$-disjunct if for every $S \subseteq [[n]]$ and every $i \in [[n]]$ such that $i \notin S$ and $|S| \le d$, $|\mathcal{N}(i) \setminus \mathcal{N}(S)| > e$ holds.

A $(d,0)$-disjunct matrix is called $d$-disjunct for simplification. One can locate up to $d$ corruptions with a $d$-disjunct matrix [21]. The locating algorithm is simple [21]: $x_j$ is identified to be corrupted if and only if $\mathcal{N}(j) \subseteq \mathcal{F}(y)$.

We remark that a $(d+2k)$-disjunct matrix is $(d,k)$-type identifiable, and one can construct a $(d+2k)$-disjunct matrix with $O((d+2k)^2 \log n)$ measurements [21, 23]. This number is larger than our result in Table 1 when $k \gg d$. We focus on the regime $n \gg k \gg d$ in this paper.
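The disjunct-matrix localization rule just described translates directly into code. The sketch below is our own illustrative helper (not from the paper): it flags $x_j$ as corrupted exactly when every measurement passing through entry $j$ is corrupted, i.e., $\mathcal{N}(j) \subseteq \mathcal{F}(y)$; the usage comment refers to the example and `measure` helper from the previous sketch.

```python
import numpy as np

def locate_corruptions(M, y):
    """Return indices j with N(j) a subset of F(y), i.e., every measurement
    that passes through entry j is corrupted. For a d-disjunct 0-1 matrix M,
    this recovers up to d corruptions exactly [21]."""
    M = np.asarray(M)
    corrupted_meas = np.isinf(y)                      # F(y)
    suspects = []
    for j in range(M.shape[1]):
        touches = M[:, j] != 0                        # N(j)
        if np.all(corrupted_meas[touches]):
            suspects.append(j)
    return suspects

# With M, x from the previous sketch: locate_corruptions(M, measure(M, x)) -> [2]
```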
Given a 0-1 matrix $M$, let $\gamma_l = \min_{j \in [[n]]} |\mathcal{N}(j)|$ and $\gamma_u = \max_{j \in [[n]]} |\mathcal{N}(j)|$ denote the minimum and maximum number of nonzero entries in a column. Given $T \subseteq [[n]]$, $E(T) := \sum_{j \in T} |\mathcal{N}(j)|$ measures the total number of nonzero entries in the columns in $T$.

Definition 4 (Expander). $M$ corresponds to a $(k, \delta, \gamma_l, \gamma_u)$-expander ($\delta \in (0,1)$) if $|\mathcal{N}(T)| \ge (1-\delta)E(T)$ for every $T \subseteq [[n]]$ with $|T| \le k$.

We say $M$ corresponds to a $(k, \delta, \gamma)$-expander if $\gamma_l = \gamma_u = \gamma$. If $M$ corresponds to a $(2k, \delta, \gamma_l, \gamma_u)$-expander with $\delta\gamma_u/\gamma_l < 1/6$ [25], then one can correctly recover $k$-sparse signals via $\ell_1$-minimization, which returns the vector with the least $\ell_1$-norm among all the vectors that can produce the obtained measurements. There exist both random and explicit constructions of expanders.

Proposition 1. [26] For any $1 \le k \le n/2$ and $\epsilon > 0$, one can explicitly construct a $(k, \epsilon, \gamma)$-expander with $m = k\gamma/\epsilon^{O(1)}$ and $\gamma = 2^{O((\log(\log(n)/\epsilon))^3)}$.

3. RECOVERY OF CORRUPTED SPARSE SIGNALS

In network tomography, $M$ is naturally a 0-1 matrix, since a path delay measurement is an aggregate sum of the corresponding link delays. A lower bound on $m$ for a 0-1 matrix $M$ to be $(d,k)$-type identifiable is stated as follows.

Proposition 2. A 0-1 $(d,k)$-type identifiable matrix $M$ has at least $[d\log(n/d) + k\log((n-d)/k)]/\log(k+2)$ rows.

Proof. Consider $(d,k)$-type vectors in which all the nonzero finite values are '1'. There are $A := \sum_{i=1}^{d}\sum_{j=1}^{k}\binom{n}{i}\binom{n-i}{j}$ such vectors. In this case, each measurement can be an integer from $0$ to $k$, or $\infty$, so there are at most $B := (k+2)^m$ possible outcomes. We need $B \ge A$, and the claim follows.

Next we consider upper bounds on the number of measurements needed. We start with a sufficient condition for $(d,k)$-type identifiable matrices.

Definition 5. $M$ is called $\mathcal{G}(d, 2k, \delta, \gamma_l, \gamma_u)$ if for every $S \subseteq [[n]]$ with $|S| \le d$, there exists $G \subseteq \mathcal{N}^c(S)$ such that the submatrix $M' = M_{GS^c}$ is a $(2k, \delta, \gamma_l, \gamma_u)$-expander.

Theorem 1. A $\mathcal{G}(d, 2k, \delta, \gamma_l, \gamma_u)$ matrix $M$ is $(d,k)$-type identifiable if $\delta\gamma_u/\gamma_l < 1/6$.

Proof. Since $\gamma_l > 0$, $M$ is $d$-disjunct, and one can correctly identify up to $d$ corruptions. Since there always exist some uncorrupted measurements that correspond to a $(2k, \delta, \gamma_l, \gamma_u)$-expander, all the real-valued entries can then be correctly recovered via $\ell_1$-minimization.

One important property of $\mathcal{G}(d, 2k, \delta, \gamma_l, \gamma_u)$ matrices is that we have a polynomial-time algorithm for recovering $x$. The recovery algorithm is summarized in Algorithm 1. It first applies the identification algorithm in GT to locate up to $d$ corruptions, which takes time $O(nm)$. It then recovers the uncorrupted real values with $\ell_1$-minimization, which has running time $O(n^3)$. For comparison, a combinatorial search algorithm to recover $x$ takes time $O(n^{O(d+k)})$.

Algorithm 1 Recovery algorithm for the error-free case
Input: $y$, $M$
1: For all $i$, $x_i$ is identified as corrupted iff $\mathcal{N}(i) \subseteq \mathcal{F}(y)$.
2: Let $D$ be the set of identified corruptions. $R = \mathcal{N}^c(D)$.
3: $x_r = \arg\min_z \|z\|_1$ s.t. $M_{RD^c} z = y_R$.
4: Return: corruptions $D$, uncorrupted values $x_r$.
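As an unofficial rendering of Algorithm 1 in code, the sketch below performs the group-testing identification step and then solves the $\ell_1$-minimization of step 3 as a linear program; `scipy.optimize.linprog` and all function names here are our choices for illustration, not part of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def recover(M, y):
    """Algorithm 1 (sketch): group-testing localization, then l1-minimization
    over the uncorrupted rows and the columns not identified as corrupted."""
    M = np.asarray(M, dtype=float)
    F = np.isinf(y)
    # Step 1: x_j is corrupted iff every measurement touching column j is corrupted.
    D = [j for j in range(M.shape[1]) if np.all(F[M[:, j] != 0])]
    # Step 2: keep rows that touch no identified corruption, and columns outside D.
    R = [i for i in range(M.shape[0]) if not np.any(M[i, D])]
    cols = [j for j in range(M.shape[1]) if j not in D]
    A, b = M[np.ix_(R, cols)], np.asarray(y, dtype=float)[R]
    # Step 3: min ||z||_1 s.t. A z = b, written as an LP with z = u - v, u, v >= 0.
    n = len(cols)
    res = linprog(c=np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    z = res.x[:n] - res.x[n:]
    return D, dict(zip(cols, z))
```

Any off-the-shelf sparse-recovery solver could replace the generic LP in step 3; the LP form is used only to keep the sketch self-contained.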
Now we present one main result regarding the number of measurements needed for $M$ to be $\mathcal{G}(d, 2k, \delta, \gamma_l, \gamma_u)$.

Theorem 2. Let $M$ be an $m \times n$ 0-1 matrix with i.i.d. entries $M_{ij}$ and $P(M_{ij}=1) = p = \delta/\max(d, 2k)$, where the constant $\delta \in (0, (\sqrt{73}-7)/12)$. If $m = O(d\max(d,k)\log n)$, then with probability $1-o(1)$, $M$ is $\mathcal{G}(d, 2k, \delta, \gamma_l, \gamma_u)$ with $\delta\gamma_u/\gamma_l < 1/6$, and is thus $(d,k)$-type identifiable.

Proof. Pick any $\epsilon \in (0,1)$. Let $I_m$ denote the event that for every set $S \subseteq [[n]]$ with $|S| \le d$, it holds that (1) $|\mathcal{N}^c(S)| \ge (1-\epsilon)(1-p)^d m$, and (2) for a fixed $G \subseteq \mathcal{N}^c(S)$ with $|G| = (1-\epsilon)(1-p)^d m$, $M_{GS^c}$ corresponds to a $(2k, \delta, (1-\delta)p|G|, (1+\delta)p|G|)$-expander.

Since $\delta(1+\delta)/(1-\delta) < 1/6$ by the assumption on $\delta$, clearly if $I_m$ happens, the claim holds. We will prove that when $m$ is as stated, $\Pr[I_m]$ goes to 1 as $n$ goes to infinity.

Given $S$ with $|S| = s$, let $F_s$ denote the event that $|\mathcal{N}^c(S)| \ge (1-\epsilon)(1-p)^d m$ holds. Given $G \subseteq \mathcal{N}^c(S)$ with $|G| = r$, let $E_r$ denote the event that $M_{GS^c}$ corresponds to a $(2k, \delta, (1-\delta)p|G|, (1+\delta)p|G|)$-expander. Since $M$ has i.i.d. entries, once $s$ and $r$ are fixed, $\Pr[F_s]$ and $\Pr[E_r]$ do not depend on $S$ and $G$. From the union bound,
$$\Pr[I_m^c] \le \sum_{s=1}^{d}\binom{n}{s}\Pr[F_s^c] + \Pr\big[E^c_{(1-\epsilon)(1-p)^d m}\big]. \quad (1)$$
We will next calculate $\Pr[I_m^c]$. The following form of the Chernoff bound [27] is applied in our analysis.

Lemma 1. Let $X$ be the sum of $n$ independent random variables $x_i \in \{0,1\}$, and let $\mu$ be its expectation. For all $\delta \in (0,1)$,
$$\Pr[X > (1+\delta)\mu] \le e^{-\frac{\delta^2\mu}{3}}, \quad \text{and} \quad \Pr[X < (1-\delta)\mu] \le e^{-\frac{\delta^2\mu}{2}}.$$

Given $S$ with $|S| = s \le d$, from Lemma 1 we have
$$\Pr[F_s^c] \le \Pr[F_d^c] \le e^{-\epsilon^2(1-p)^d m/2}. \quad (2)$$
Given $G \subseteq \mathcal{N}^c(S)$ with $|G| = r$, let $D_r$ denote the event that the number of nonzero entries in every column of $M_{GS^c}$ is in $[(1-\delta)pr, (1+\delta)pr]$. From Lemma 1 and the union bound,
$$\Pr[D_r^c] \le ne^{-\delta^2 pr/3} + ne^{-\delta^2 pr/2} \le 2ne^{-\delta^2 pr/3}. \quad (3)$$
Given $T \subseteq S^c$ with $|T| = t$, from Lemma 1 we have
$$\Pr\big[|\mathcal{N}(T)| \le (1-\delta/8)[1-(1-p)^t]r\big] \le e^{-\frac{\delta^2[1-(1-p)^t]r}{128}}, \quad (4)$$
$$\Pr[E(T) \ge (1+\delta/8)ptr] \le e^{-\delta^2 ptr/192}, \quad (5)$$
where $\mathcal{N}(T)$ and $E(T)$ are defined with respect to the matrix $M_{GS^c}$. Since $p = \delta/\max(d, 2k)$, through a Taylor expansion one can check that for all $t \le 2k$ it holds that
$$(1-\delta/8)[1-(1-p)^t]r \ge (1-\delta)(1+\delta/8)ptr. \quad (6)$$
From (4) to (6) and the union bound, we have
$$\Pr\big[|\mathcal{N}(T)| \le (1-\delta)E(T), \text{ given } T\big] \le e^{-c_1\delta^2 p|T|r}, \quad (7)$$
where $c_1$ is a constant independent of $\delta$, $p$, $k$, and $n$. From the union bound, we have
$$\Pr[E_r^c] \le \Pr[D_r^c] + \sum_{t=1}^{2k}\binom{n}{t}\Pr\big[|\mathcal{N}(T)| \le (1-\delta)E(T), \text{ given } T \text{ with } |T| = t\big] \le 2ne^{-\delta^2 pr/3} + \sum_{t=1}^{2k} e^{t(\log(n/t)+1) - c_1\delta^2 ptr}, \quad (8)$$
where the second inequality follows from (3) and (7).

Plugging (8) and (2) into (1), we have $\Pr[I_m^c] \to 0$ as $n \to \infty$, provided that
$$m \ge 2(d\log(n/d) + \log n)\big/\big(p(1-\epsilon)(1-p)^d\delta^2\big).$$
Since $p = \delta/\max(d, 2k)$, we have $(1-p)^d \ge 1/4$. Then when
$$m \ge 8\max(d, 2k)(d\log(n/d) + \log n)/\delta^3,$$
with probability $1-o(1)$, $M$ is $(d,k)$-type identifiable.

Theorem 2 indicates that a randomly generated 0-1 matrix with $O((d+1)\max(d,k)\log n)$ measurements is $(d,k)$-type identifiable with high probability. We compare this result with existing ones in CS and GT in Table 1. We next provide an explicit measurement construction method based on expanders.

Theorem 3. $M$ is $(d,k)$-type identifiable if it corresponds to a $(d+2k, \epsilon/d, \gamma)$-expander with $\frac{\epsilon(d+1)}{d-\epsilon(d+1)} < 1/6$.

Proof. For every $S$ with $|S| = s \le d$ and every $T \subseteq S^c$ with $|T| = t \le 2k$, from the expansion property and $|\mathcal{N}(S)| \le s\gamma$, we have
$$|\mathcal{N}(S \cup T)| - |\mathcal{N}(S)| \ge (1 - \epsilon(d+1)/d)\,t\gamma. \quad (9)$$
Then the number of nonzero entries in each column $i$ of $M_{\mathcal{N}^c(S)S^c}$ is between $(1-\epsilon(d+1)/d)\gamma$ and $\gamma$. Since $M_{\mathcal{N}^c(S)T}$ has at most $t\gamma$ nonzero entries, from (9) one can check that $M_{\mathcal{N}^c(S)S^c}$ is a $(2k, \frac{\epsilon(d+1)}{d}, (1-\frac{\epsilon(d+1)}{d})\gamma, \gamma)$-expander. The claim follows.

From Theorem 3 together with Proposition 1, an explicit construction of a $(d,k)$-type identifiable matrix uses $O\big(d^{O(1)}(d+2k)2^{O((d\log(\log(n)/\epsilon))^3)}\big)$ measurements, and this number is larger than that in Theorem 2 with random construction.
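For completeness, the random construction analyzed in Theorem 2 can be instantiated as below. This is our own sketch: the row count mirrors the bound $m \ge 8\max(d,2k)(d\log(n/d)+\log n)/\delta^3$ from the proof (whose constants are not optimized), and $\delta = 0.1$ is an arbitrary admissible choice inside $(0, (\sqrt{73}-7)/12)$.

```python
import numpy as np

def random_measurement_matrix(n, d, k, delta=0.1, rng=None):
    """Random 0-1 matrix of Theorem 2: i.i.d. Bernoulli(p) entries with
    p = delta / max(d, 2k); delta must lie in (0, (sqrt(73)-7)/12) ~ (0, 0.128).
    The number of rows follows m >= 8*max(d,2k)*(d*log(n/d) + log n)/delta^3,
    i.e., the (unoptimized) constants from the proof."""
    rng = np.random.default_rng(rng)
    p = delta / max(d, 2 * k)
    m = int(np.ceil(8 * max(d, 2 * k)
                    * (d * np.log(n / max(d, 1)) + np.log(n)) / delta ** 3))
    return (rng.random((m, n)) < p).astype(int)
```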


4. ERRONEOUS MEASUREMENTS

We next consider the case where the measurements contain errors. Let $\alpha_i^T$ denote the $i$th row of $M$. We consider two types of errors: (1) $y_i = \infty$ when $\alpha_i^T x \in \mathbb{R}$, and (2) $y_i \in \mathbb{R}$ when $\alpha_i^T x = \infty$. We assume that the total number of these two types of errors is at most $h$, and these errors can happen at arbitrary unknown locations. The goal is to design a measurement matrix $M$ such that all $(d,k)$-type signals can be correctly recovered no matter where the $h$ errors are.

If $M$ is a $(d, 2h+1)$-disjunct matrix, then one can identify $d$ corruptions in the presence of at most $h$ errors [21].

Then one sufficient condition for identifying $(d,k)$-type signals from measurements that contain $h$ errors is as follows.

Proposition 3. If $M$ is $(d, 2h+1)$-disjunct, and for every $S$ of up to $d$ corruptions and every $H$ of up to $h$ errors there exists $G \subseteq [[m]] \setminus (\mathcal{N}(S) \cup H)$ such that $M_{GS^c}$ corresponds to a $(2k, \delta, \gamma_l, \gamma_u)$-expander with $\delta\gamma_u/\gamma_l < 1/6$, then all $(d,k)$-type signals can be recovered in the presence of $h$ errors.

The proof follows directly from the previous discussions and is skipped. The recovery algorithm for such matrices is stated in Algorithm 2. We prove that these matrices can be obtained through random construction with the same probability $p$ as that in Theorem 2. The bound on the number of measurements needed is as follows.

Algorithm 2 Recovery algorithm for up to $h$ errors
Input: $y$, $M$
1: For each $S \subseteq [[n]]$ with $|S| \le d$, if $|\mathcal{N}(S) \setminus \mathcal{F}(y)| + |\mathcal{F}(y) \setminus \mathcal{N}(S)| \le h$, then $S$ is the set of corruptions, denoted by $D$.
2: $R = [[m]] \setminus (\mathcal{N}(D) \cup \mathcal{F}(y))$, $M_r = M_{RD^c}$.
3: $x_r = \arg\min_z \|z\|_1$ s.t. $M_r z = y_R$.
4: Return: corruptions $D$, uncorrupted values $x_r$.

Theorem 4. One can identify all $(d,k)$-type signals from $m = O\big(\max(d,k)(d\log n + h\log(\max(d,k)) + h\log\log n)\big)$ measurements that contain up to $h$ errors in arbitrary locations.

Proof. The proof follows the same lines as that of Theorem 2, and we skip the details. Let $\hat{I}_m$ denote the event that for every set $S$ with $|S| \le d$ and every set $H$ with $|H| \le h$: (1) $|\mathcal{N}^c(S)| \ge (1-\epsilon)(1-p)^d m$; (2) $M_{\mathcal{N}^c(S)S^c}$ has at least $(1-\delta)p(1-\epsilon)(1-p)^d m$ nonzero entries in each column; (3) $M_{G'S^c}$ corresponds to a $(2k, \delta, (1-\delta)pr', (1+\delta)pr')$-expander for a fixed $G' \subseteq \mathcal{N}^c(S) \setminus H$ with $|G'| = r' = (1-\epsilon)(1-p)^d m - h$ entries. If $\hat{I}_m$ happens, and if it holds that
$$2h + 1 \le (1-\delta)p(1-\epsilon)(1-p)^d m, \quad (10)$$
then one can identify all $(d,k)$-type signals from $m$ measurements that contain at most $h$ errors at arbitrary locations. One can check that $\Pr[\hat{I}_m^c] \to 0$ as $n \to \infty$ provided that $m$ is as stated in the theorem, and (10) follows for this choice of $m$. The claim then follows.
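For intuition, step 1 of Algorithm 2 can be written as a brute-force search over candidate corruption sets that tolerates up to $h$ mismatches between the predicted set $\mathcal{N}(S)$ and the observed set $\mathcal{F}(y)$. The sketch below is our own illustration; like the combinatorial search it mirrors, it runs in time exponential in $d$.

```python
import numpy as np
from itertools import combinations

def locate_corruptions_with_errors(M, y, d, h):
    """Step 1 of Algorithm 2 (sketch): find a candidate corruption set S with
    |S| <= d whose predicted corrupted measurements N(S) differ from the
    observed set F(y) in at most h positions."""
    M = np.asarray(M)
    F = set(np.flatnonzero(np.isinf(y)))              # observed corrupted measurements
    for size in range(d + 1):
        for S in combinations(range(M.shape[1]), size):
            cols = list(S)
            N_S = set(np.flatnonzero(M[:, cols].any(axis=1))) if cols else set()
            if len(N_S - F) + len(F - N_S) <= h:
                return set(S)
    return None
```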
5. NUMERICAL RESULTS

We fix $n = 1000$ and the number of '1's in each column of $M$ to be 5, and randomly generate a 0-1 matrix $M$ with $m = 600$ and $m = 800$, respectively. We first consider the performance of identifying corruptions in Fig. 1. For each $d$, we randomly choose the locations of the corruptions, and the results are averaged over 500 runs. When $m = 600$, 10 corruptions can be correctly identified.

[Fig. 1. Identification of corruptions: percentage of unsuccessful recovery versus the number of corruptions, for $m = 600$ and $m = 800$.]

In Fig. 2, we fix $d$ and increase the number of nonzero entries $k$. The locations of corruptions and nonzero entries are randomly chosen, and the nonzero entries are sampled as i.i.d. Gaussian random variables. Algorithm 1 is applied to recover $(d,k)$-type signals. Let $x^*$ contain the uncorrupted entries, and let $x_r$ denote our reconstruction. $\|x^* - x_r\|_2/\|x^*\|_2$ is the normalized recovery error of the uncorrupted part. The results are averaged over 100 runs. When $m = 600$, we can recover all $(5,220)$-type signals or all $(10,200)$-type signals.

[Fig. 2. Recovery of sparse signals with corruptions: normalized recovery error of the uncorrupted part versus the number of nonzero real values ($n = 1000$, $m = 600$, $d \in \{0, 5, 10, 15\}$).]
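A minimal sketch of one trial of the experiment described above (our own code, not the authors' scripts): it assumes the `measure` and `recover` helpers from the earlier sketches, builds a 0-1 matrix with five ones per column, plants $d$ corruptions and $k$ Gaussian delays, and reports the normalized error of the uncorrupted part.

```python
import numpy as np

def column_regular_matrix(m, n, ones_per_col=5, rng=None):
    """0-1 matrix with a fixed number of ones per column, as in the experiments."""
    rng = np.random.default_rng(rng)
    M = np.zeros((m, n), dtype=int)
    for j in range(n):
        M[rng.choice(m, ones_per_col, replace=False), j] = 1
    return M

def one_trial(m=600, n=1000, d=10, k=200, rng=None):
    """One run: plant d corruptions and k Gaussian delays, recover, and return
    the normalized error ||x* - x_r||_2 / ||x*||_2 of the uncorrupted part."""
    rng = np.random.default_rng(rng)
    M = column_regular_matrix(m, n, rng=rng)
    x = np.zeros(n)
    idx = rng.choice(n, d + k, replace=False)
    x[idx[:d]] = np.inf                      # d corrupted entries (failed links)
    x[idx[d:]] = rng.standard_normal(k)      # k nonzero real link delays
    y = measure(M, x)                        # forward model from the earlier sketch
    D, xr = recover(M, y)                    # Algorithm 1 sketch from Section 3
    truth = {j: x[j] for j in range(n) if np.isfinite(x[j])}
    err = np.linalg.norm([xr.get(j, 0.0) - v for j, v in truth.items()])
    return err / np.linalg.norm(list(truth.values()))
```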
6. CONCLUSION

We considered recovering sparse link delay values from path delay measurements in the presence of link failures and for the first time formulated it as a CS problem with corruptions. We provided bounds on the number of nonadaptive measurements needed to identify both corruptions and real entries. Explicit constructions and efficient recovery algorithms are also provided. One ongoing work is to explore construction methods with fewer measurements.

7. REFERENCES

[1] E. Candès and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
[2] E. Candès and T. Tao, "Near-optimal signal recovery from random projections: Universal encoding strategies?," IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5406–5425, 2006.
[3] D. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[4] D. Donoho, "High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension," Discrete Comput. Geom., vol. 35, no. 4, pp. 617–652, 2006.
[5] T. Bu, N. Duffield, F. L. Presti, and D. Towsley, "Network tomography on general topologies," in Proc. ACM SIGMETRICS, 2002, pp. 21–30.
[6] A. Coates, A. O. Hero III, R. Nowak, and B. Yu, "Internet tomography," IEEE Signal Process. Mag., vol. 19, no. 3, pp. 47–65, 2002.
[7] Y. Chen, D. Bindel, H. H. Song, and R. H. Katz, "Algebra-based scalable overlay network monitoring: Algorithms, evaluation, and applications," IEEE/ACM Trans. Netw., vol. 15, no. 5, pp. 1084–1097, 2007.
[8] N. Duffield, "Network tomography of binary network performance characteristics," IEEE Trans. Inf. Theory, vol. 52, no. 12, pp. 5373–5388, 2006.
[9] A. Gopalan and S. Ramasubramanian, "On identifying additive link metrics using linearly independent cycles and paths," IEEE/ACM Trans. Netw., vol. 20, no. 3, pp. 906–916, 2012.
[10] H. X. Nguyen and P. Thiran, "Using end-to-end data to infer lossy links in sensor networks," in Proc. IEEE INFOCOM, 2006, pp. 1–12.
[11] M. Coates, Y. Pointurier, and M. Rabbat, "Compressed network monitoring for IP and all-optical networks," in Proc. ACM SIGCOMM IMC, 2007, pp. 241–252.
[12] J. Haupt, W. U. Bajwa, M. Rabbat, and R. Nowak, "Compressed sensing for networked data," IEEE Signal Process. Mag., vol. 25, no. 2, pp. 92–101, 2008.
[13] W. Xu, E. Mallada, and A. Tang, "Compressive sensing over graphs," in Proc. IEEE INFOCOM, 2011.
[14] M. Wang, W. Xu, E. Mallada, and A. Tang, "Sparse recovery with graph constraints: Fundamental limits and measurement construction," in Proc. IEEE INFOCOM, 2012, pp. 1871–1879.
[15] P. Babarczi, J. Tapolcai, and P. Ho, "Adjacent link failure localization with monitoring trails in all-optical mesh networks," IEEE/ACM Trans. Netw., vol. 19, no. 3, pp. 907–920, 2011.
[16] M. Cheraghchi, A. Karbasi, S. Mohajer, and V. Saligrama, "Graph-constrained group testing," IEEE Trans. Inf. Theory, vol. 58, no. 1, pp. 248–262, 2012.
[17] N. J. A. Harvey, M. Patrascu, Yonggang Wen, S. Yekhanin, and V. W. S. Chan, "Non-adaptive fault diagnosis for all-optical networks via combinatorial group testing on graphs," in Proc. IEEE INFOCOM, 2007, pp. 697–705.
[18] J. Tapolcai, B. Wu, P. Ho, and L. Rónyai, "A novel approach for failure localization in all-optical mesh networks," IEEE/ACM Trans. Netw., vol. 19, pp. 275–285, 2011.
[19] R. Dorfman, "The detection of defective members of large populations," Ann. Math. Statist., vol. 14, pp. 436–440, 1943.
[20] M. Cheraghchi, A. Hormati, A. Karbasi, and M. Vetterli, "Group testing with probabilistic tests: Theory, design and application," IEEE Trans. Inf. Theory, vol. 57, no. 10, pp. 7057–7067, 2011.
[21] D. Du and F. K. Hwang, Combinatorial Group Testing and Its Applications, World Scientific Publishing Company, second edition, 2000.
[22] A. C. Gilbert, B. Hemenway, A. Rudra, M. J. Strauss, and M. Wootters, "Recovering simple signals," in Proc. ITA, 2012, pp. 382–391.
[23] E. Porat and A. Rothschild, "Explicit non-adaptive combinatorial group testing schemes," in Proc. ICALP, 2008, pp. 748–759.
[24] L. Applebaum, W. U. Bajwa, R. Calderbank, and S. Howard, "Choir codes: Coding for full duplex interference management," in Proc. Allerton, 2011, pp. 1–8.
[25] R. Berinde, A. Gilbert, P. Indyk, H. Karloff, and M. Strauss, "Combining geometry and combinatorics: a unified approach to sparse signal recovery," arXiv:0804.4666, 2008.
[26] M. Capalbo, O. Reingold, S. Vadhan, and A. Wigderson, "Randomness conductors and constant-degree lossless expanders," in Proc. ACM STOC, 2002, pp. 659–668.
[27] M. Mitzenmacher and E. Upfal, Probability and Computing: Randomized Algorithms and Probabilistic Analysis, Cambridge University Press, 2005.