Non-Line-of-Sight Localization Using Low-Rank + Sparse Matrix Decomposition

Venkatesan N. Ekambaram and Kannan Ramchandran

Department of EECS, University of California, Berkeley. Email: {venkyne, kannanr}@eecs.berkeley.edu

Abstract—We consider the problem of estimating the locations of a set of points in a k-dimensional Euclidean space, given a subset of the pairwise distance measurements between the points. We focus on the case when some fraction of these measurements can be arbitrarily corrupted by large additive noise. This is motivated by applications like sensor networks, molecular conformation and manifold learning, where the measurement process can induce large bias errors in some fraction of the distance measurements due to physical effects like multipath, spin-diffusion etc. Given the NP-completeness of the problem, we propose a convex relaxation that involves decomposing the partially observed matrix of distance measurements into low-rank and sparse components, wherein the low-rank component corresponds to the Euclidean distance matrix and the sparse component is a matrix of biases. Using recent results from the literature, we show that this convex relaxation yields the exact solution for the class of fixed radius random geometric graphs. We evaluate the performance of the algorithm on an experimental data set obtained from a network of 44 nodes in an indoor environment and show that the algorithm is robust to non-line-of-sight bias errors.

Keywords: Non-line-of-sight localization, robust localization.

Figure 1. (a) Sensors placed in a field for monitoring the region. (b) Tryptophan, one of the 20 standard amino acids [8]. (c) A high dimensional "swiss roll" data set that spans a lower dimensional hypersurface [9].

I. INTRODUCTION

The problem of obtaining the locations of a set of points given pairwise distances between the points is a topic of significant research interest. It has applications in a broad spectrum of areas such as sensor networks, molecular biology, data analysis, manifold learning etc. In sensor networks, the locations of different sensor nodes need to be estimated, given the distance measurements between nodes that are within some communication radius of each other [1] (Fig. 1(a)). The structure of a protein molecule is determined by estimating the distances between the component atoms of the molecule using techniques such as NMR spectroscopy [2] (Fig. 1(b)). Many applications that involve processing massive data sets in high dimensions require efficient representations of the data in a low dimensional space. Most of these data sets tend to span a low dimensional hypersurface in the higher dimensional space, and pairwise proximity measurements between the data points can be used to obtain an efficient representation of the data in lower dimensions that preserves the relative conformation of the points [3] (Fig. 1(c)).

Given the range of applications, significant research in the literature deals with theory and algorithms for localization tailored to the problem of interest. The problem is shown to be NP-complete [4] in the general case, wherein one is provided with an arbitrary subset of the pairwise distance measurements and is asked to find a valid configuration of points satisfying these distance measurements. This is hard even when one has the side-information that there is a unique set of points satisfying the given distances. Hence, most of the work in the literature focuses on developing efficient localization algorithms with provable theoretical guarantees for specific node geometries, in the setting where the distance measurements are either exact or only slightly perturbed, which is the case for line-of-sight (LOS) localization [5], [6]. Existing theoretical results [7], [6] have shown that LOS localization can be achieved in polynomial time for random geometric graphs.

This project is supported in part by AFOSR grant FA9550-10-1-0567.

We consider a generalized version of the original LOS localization problem, wherein we assume that some fraction of the distance measurements can have large bias errors. This problem is traditionally known as non-line-of-sight (NLOS) localization. The motivation for this problem setup arises from real world physical processes that cause bias errors in the measurements. For example, multipath interference causes huge bias errors in the distance measurements obtained between nodes in a sensor network or from the GPS satellites in vehicular networks [11]. The spin-diffusion phenomenon in NMR spectroscopy causes apparent shrinkage of the distance measurements in protein molecular conformation applications. Outliers in the data can cause large bias errors in applications like manifold learning. Thus there has been significant recent interest in developing algorithms that tackle NLOS errors [11], [?]. However, to the best of our knowledge, there are no theoretical guarantees for this problem.

There is some existing work in the literature that formulates LOS localization as a matrix completion problem [14]. The problem boils down to completing a low-rank matrix of squared distances known as the Euclidean Distance Matrix (EDM), and there are efficient convex relaxations that can be used for matrix completion. Our contribution in this work is the following:

(a) We formulate the NLOS localization problem as one of matrix decomposition, wherein we are interested in decomposing the partially observed matrix of corrupted pairwise squared distances into a low-rank EDM and a sparse matrix of bias errors.

(b) Using existing results from the matrix decomposition literature, we show that the proposed relaxation achieves exact localization for a random geometric graph with radius greater than a threshold that is a function of the fraction of LOS measurements.

(c) We evaluate the performance of the algorithm on a real-world dataset obtained from a static network of 44 nodes in an indoor environment.

II. PROBLEM SETUP

Consider the case where we have N points/nodes located inside a unit cube in k-dimensional Euclidean space. Let X ∈ R^{N×k} be the matrix of all node locations, where each row represents a point in the k-dimensional space. Let us assume that these points are placed uniformly at random in this space and that we have distance measurements between any two points that are within a radius r of each other. Note that given only these distance measurements, one can at most hope to obtain the conformation of the points up to rigid transformations.

Let D ∈ R^{N×N} be the matrix of all possible pairwise squared distances, i.e. D_ij = ||x_i − x_j||², where x_i^T is the ith row of X. Let D̂ be the matrix of distance measurements that is partially observed. We will assume the following measurement model,

  D̂_ij = D_ij + B_ij + N_ij  ∀(i,j) : ||x_i − x_j|| ≤ r,

where B_ij is the bias in the measurement due to NLOS errors and N_ij is the thermal noise; B is the matrix of biases. We will assume that only some fraction (α) of the entries {B_ij : ||x_i − x_j|| ≤ r} are non-zero. Further, we will also assume that the thermal noise is a small perturbation that is bounded by a quantity ∆.

We need to relate the matrix D and the node locations X. Clearly, D is invariant to rigid transformations of X, i.e. rotations, translations and reflections of the relative node placements. Hence we need to define a metric for comparing node location estimates that is invariant to these rigid transformations. The following equation relates the distance matrix D and the node locations X:

  D = vecdiag(XX^T)1^T + 1 vecdiag(XX^T)^T − 2XX^T,

where the vecdiag operation stacks the diagonal entries of the input matrix into a column vector and 1 is the all-ones vector (see Fig. 3). We will now derive one possible set of node locations given a distance matrix D. Since D is invariant to translations, let us normalize the node locations so that they are centered at the origin. Define L = I_N − (1/N)11^T ∈ R^{N×N}. Then X̃ = LX is a set of node locations centered at the origin (see Fig. 2). Since 1 lies in the left and right null spaces of L, multiplying the expression for D on both sides by L gives −(1/2)LDL = LX(LX)^T. Thus, given a fully observed noiseless distance matrix D, one can obtain a set of node locations centered at the origin by choosing X̃ = UΣ^{1/2}, where −(1/2)LDL has the singular value decomposition (SVD) −(1/2)LDL = UΣU^T. In order to compare two possible node location estimates X and X̂, we will use the error metric ||LXX^T L − LX̂X̂^T L||. Note that LX takes care of the translations and XX^T is invariant to rotations and reflections.

Figure 2. Centering the node locations.
Figure 3. Relating the distance matrix and the node locations.

With this background we can now proceed with the problem formulation. Note that the rank of D ∈ R^{N×N} is at most k+2, irrespective of the number of nodes N. Thus the problem of interest is to decompose the partially observed measurement matrix D̃ into a low-rank matrix D and a sparse matrix B.
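The distance–location relations above are easy to check numerically. The following is a minimal sketch (NumPy assumed; all variable names are ours, not from the paper's code): it builds D from random locations, verifies the identity D = vecdiag(XX^T)1^T + 1 vecdiag(XX^T)^T − 2XX^T, and recovers a centered configuration from the top-k eigenpairs of −(1/2)LDL.

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 20, 2
X = rng.uniform(-1.0, 1.0, size=(N, k))          # node locations, one per row

# Squared-distance matrix D_ij = ||x_i - x_j||^2
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)

# Identity: D = vecdiag(XX^T) 1^T + 1 vecdiag(XX^T)^T - 2 XX^T
G = X @ X.T                                      # Gram matrix
d = np.diag(G)[:, None]                          # column of squared norms
ones = np.ones((N, 1))
assert np.allclose(D, d @ ones.T + ones @ d.T - 2 * G)

# Centering: -1/2 L D L = (LX)(LX)^T, with L = I - (1/N) 1 1^T
L = np.eye(N) - np.ones((N, N)) / N
Bmat = -0.5 * L @ D @ L
assert np.allclose(Bmat, (L @ X) @ (L @ X).T)

# Recover a centered configuration from the top-k eigenpairs of -1/2 L D L
w, U = np.linalg.eigh(Bmat)                      # eigenvalues in ascending order
X_hat = U[:, -k:] * np.sqrt(np.maximum(w[-k:], 0.0))

# The estimate matches X up to a rigid transformation:
assert np.allclose(X_hat @ X_hat.T, (L @ X) @ (L @ X).T, atol=1e-8)
```

The last assertion is exactly the invariant error metric of the text: the recovered configuration agrees with the truth in the rotation- and translation-invariant sense.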

The reason to look for a sparse matrix B is in the hope that only a small fraction of the measurements are arbitrarily corrupted, and hence we want a configuration of points with the minimum number of distance measurements being corrupted by bias errors. Thus, in the noiseless case (∆ = 0), we have the following:

  find (D, B)
  s.t. D̃_ij = D_ij + B_ij for all observed distances,
       D is of rank k, B is sparse.

This problem is non-convex and NP-hard in general for arbitrary low-rank matrices. This is because the rank constraint on D essentially says that the L0-norm of the vector of singular values of D should equal k, which is a highly non-convex constraint and NP-hard to enforce. Similarly, the sparsity constraint on B would require us to minimize the L0-norm of the matrix B, which is also NP-hard. Hence, in order to have a tractable optimization problem, we will look for the closest convex approximations to these two constraints that are easier to solve. The smallest convex ball approximation to the L0-norm is the L1-norm; an example is shown in Fig. 4. The points where the lines intersect the two axes have one coordinate equal to zero, and hence these form the contours of the L0-norm. In solving the optimization problem, the constraints on the observed distance measurements essentially manifest as a plane in this dimension, or in general a convex surface, and the solution is the point where the smallest L1-ball touches this surface. Similarly, a convex approximation for the rank constraint is the L1-norm of the singular values of D, known as the nuclear norm. Thus we will use the nuclear norm and the L1-norm as surrogates for rank and sparsity respectively. Algorithm 1 details the optimization problem that we will solve.

Algorithm 1 Matrix decomposition for NLOS localization
1: Input: distance measurements d̃_ij, (i,j) ∈ E; dimension k; bound on the measurement noise ∆.
2: Solve
     (D̂, B̂) = arg min ||D||_* + λ||B||_1
     s.t. |D_ij + B_ij − D̃_ij| ≤ ∆ ∀(i,j) ∈ E.
3: Compute the rank-k approximation of −(1/2)LD̂L and its SVD Û_k Σ̂_k Û_k^T.
4: Output: X̂ = Û_k Σ̂_k^{1/2}.

Figure 4. L1 approximation to the L0 norm.

This problem of decomposing a matrix into a low-rank and a sparse matrix has been of recent interest in the literature [15], and we use existing results to provide theoretical guarantees. We will focus on the case when ∆ = 0.

III. RESULTS

The following theorem summarizes our result.

Theorem 1. For a random placement of nodes in [−1,1]^k, ignoring border effects, as long as the radius of connectivity (r) between nodes that obtain pairwise distance measurements satisfies r ≥ O((1/(1−α))^{1/k}), where an α fraction of the measurements have bias errors, the proposed optimization routine recovers the node locations exactly, modulo rigid transformations, with high probability.

Proof: Standard matrix decomposition into low-rank and sparse components can be achieved only when the low-rank matrix is not itself sparse. For this, the singular vectors of the matrix have to be well spread out, for which they need to satisfy conditions known as incoherence conditions [15]. In our case, as the singular vectors are strongly related to the node location vectors, this condition is equivalent to requiring that the node locations be well spread out, which is true for a random node placement. Further, if we have to reconstruct every row of the low-rank matrix, then we need to observe at least some fraction of uncorrupted entries in that row. This places a lower bound on the number of neighbors each node has, and hence a restriction on the radius of connectivity, which is a function of the fraction of corrupted entries. Thus the proof ingredients involve showing that the EDM is incoherent with high probability for a random node placement, and obtaining a lower bound on the radius of connectivity. The incoherence proof is modeled closely on the results of Oh et al. [16], with slight changes needed to account for the modified incoherence definitions of Chen et al. [15]. Our proofs are shown to hold in expectation; these can be converted to high probability results using concentration inequalities.

Given the true locations X and the estimated locations X̂, we will use the distortion metric ||LXX^T L − LX̂X̂^T L|| to characterize how close the estimated relative node geometry is to the true node geometry. The norm can be taken to be the Frobenius norm for simplicity. Since matrix recovery results characterize the error in recovering the distance matrix D, the following lemma will be useful.

Lemma 1. ||LXX^T L − LX̂X̂^T L|| ≤ ||D − D̂||.

Proof:

  ||L(XX^T − X̂X̂^T)L|| ≤ ||L(XX^T + (1/2)D̂)L|| + ||L(−(1/2)D̂ − X̂X̂^T)L||
                        ≤ (1/2)||D − D̂|| + ||−(1/2)LD̂L − LX̂X̂^T L||.

Note that LX̂X̂^T L is the best rank-k approximation to −(1/2)LD̂L. Thus if S is any other rank-k matrix, we have ||−(1/2)LD̂L − S|| ≥ ||−(1/2)LD̂L − LX̂X̂^T L||. Choosing S = −(1/2)LDL, we get

  ||L(XX^T − X̂X̂^T)L|| ≤ ||D − D̂||.

Given the above lemma, we can concentrate on bounding the error ||D − D̂||. We will use the following result of Chen et al. [15]. Consider symmetric matrices R, S ∈ R^{N×N} such that R is low rank and S is sparse (the exact conditions are given in the theorem below). Given a subset Ω of the entries of R̃ = R + S, we are interested in recovering R and S. Let R have the SVD R = UΣU^T. R is said to be (µ, k')-incoherent if the following conditions hold for some k' ∈ {1, ..., N} and µ ∈ {1, ..., N/k'}:

  • rank(R) = k';
  • max_i ||U^T e_i|| ≤ sqrt(µk'/N);
  • ||UU^T||_∞ ≤ sqrt(µk')/N,

where the e_i are the standard basis vectors. Consider the following optimization problem:

  (R̂, Ŝ) = arg min ||R||_* + λ||S||_1
  s.t. (R + S)_ij = R̃_ij ∀(i,j) ∈ Ω.

Let q be the sum of the maximum number of unobserved entries and entries with errors (i.e. S_ij ≠ 0) in any row of R̃. Let η be such that σ_max(S) ≤ ηq||S||_∞, where σ_max denotes the maximum singular value, and let β = 2 sqrt(µk'q)/N. The following theorem holds.

Theorem 2. [15] For λ ∈ [ (1/(1−2β)) sqrt(µk')/N, ((1−β)/(ηq)) N/sqrt(µk') ], there exists a constant c, independent of N, µ and k', such that with probability 1 − cN^{−10} the optimization recovers (R, S) exactly if

  2 sqrt(µk'q)/N + η sqrt(q/N) ≤ 1/2.

In order to apply the above theorem to our problem setting, we need to show that D satisfies the incoherence properties and identify the parameters, and thereby the conditions on the radius r and the fraction α for which the theorem holds. The following lemma states the incoherence of D.

Lemma 2. D is (16k²(k+2), k+2)-incoherent with high probability.

Proof: Appendix.

We also need a bound on the maximum singular value of the symmetric bias matrix B. We have σ_max(B) = max_z ||Bz||₂/||z||₂. Let ∆' be the maximum value of the bias. Using the above we get σ_max(B) ≤ sqrt(N)∆' ≤ ηq∆'; η can thus be chosen as sqrt(N)/q to satisfy one of the conditions of Theorem 2.

Finally, we need the number of corrupted and unobserved entries (q) in each row. Ignoring border effects, the average number of neighbors of a node in k-dimensional Euclidean space is κr^k N, where κ = π^{k/2}/(2^k Γ(k/2 + 1)). Assuming that an α fraction of the measurements are corrupted by NLOS noise, the average number of unobserved entries plus entries with errors in every row is E(q) = N(1 − (1−α)κr^k). One can show that q is concentrated around its mean, and existing results from the literature [17] can be used to bound this value with high probability; we omit the details for the sake of brevity. Therefore, for the condition in the theorem to be satisfied, we need

  2 sqrt(µ(k+2)N(1 − (1−α)κr^k))/N + 1/sqrt(q) ≤ 1/2.

For large values of N we get r ≥ O((1/(1−α))^{1/k}).

IV. VALIDATION

In this section we provide simulation results and validation on experimental data to evaluate the performance of the proposed algorithm based on robust matrix decomposition.

A. Simulations

For simulations, we have 50 nodes placed uniformly at random in the grid [−1,1]². The NLOS bias in the measurements is taken to be a uniform random variable supported on [0, 6], and the Gaussian noise in the measurements is taken to have a standard deviation of 0.02m. Fig. 5 shows the results for the case where nodes within a radius of 1.5m of each other obtain distance measurements. The figure plots the normalized error ||LXX^T L − LX̂X̂^T L||₂ / ||LXX^T L||₂ as a function of the fraction of NLOS measurements, averaged over multiple iterations. One can see that the errors are quite large even when the fraction of NLOS measurements is small.
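The decomposition step at the heart of these simulations (step 2 of Algorithm 1) can, in the fully observed noiseless case, be approximated with a standard inexact augmented Lagrangian iteration: singular-value thresholding for the nuclear norm and entrywise soft thresholding for the L1 term. The sketch below is ours, not the paper's implementation (NumPy assumed; the λ default and step-size heuristic are common choices from the robust-PCA literature, not values from this paper):

```python
import numpy as np

def svt(A, tau):
    # Singular-value thresholding: proximal operator of tau * nuclear norm.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(A, tau):
    # Entrywise soft thresholding: proximal operator of tau * L1 norm.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def low_rank_plus_sparse(M, lam=None, n_iter=300):
    # Minimize ||D||_* + lam * ||B||_1  s.t.  D + B = M,
    # via an inexact augmented Lagrangian (ADMM-style) iteration.
    n = M.shape[0]
    lam = 1.0 / np.sqrt(n) if lam is None else lam
    mu = n * n / (4.0 * np.abs(M).sum())     # common step-size heuristic
    D = np.zeros_like(M)
    B = np.zeros_like(M)
    Y = np.zeros_like(M)                     # dual variable
    for _ in range(n_iter):
        D = svt(M - B + Y / mu, 1.0 / mu)
        B = shrink(M - D + Y / mu, lam / mu)
        Y += mu * (M - D - B)
        mu = min(mu * 1.1, 1e7)              # gentle continuation on mu
    return D, B

# Synthetic check: a rank-2 matrix plus a sparse matrix of large biases.
rng = np.random.default_rng(1)
n = 40
F = rng.standard_normal((n, 2))
D_true = F @ F.T                                          # low-rank component
B_true = np.where(rng.random((n, n)) < 0.05, 10.0, 0.0)   # sparse biases
D_hat, B_hat = low_rank_plus_sparse(D_true + B_true)
assert np.linalg.norm(D_hat - D_true) / np.linalg.norm(D_true) < 0.1
```

This handles only the fully observed case; the partially observed program of Algorithm 1 additionally restricts the equality constraint to the edge set E, which a general-purpose convex solver can accommodate directly.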

Figure 5. Localization error as a function of the fraction of NLOS measurements (α) for a network with r = 1.5m and for a fully connected network.
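For reference, the normalized error plotted in Fig. 5 can be computed as below (a sketch assuming NumPy; names are illustrative). As the problem setup requires, the metric is invariant to rigid transformations of the estimate:

```python
import numpy as np

def normalized_error(X, X_hat):
    # ||L X X^T L - L Xh Xh^T L||_F / ||L X X^T L||_F, the metric of Fig. 5.
    N = X.shape[0]
    L = np.eye(N) - np.ones((N, N)) / N
    G = L @ X @ X.T @ L
    G_hat = L @ X_hat @ X_hat.T @ L
    return np.linalg.norm(G - G_hat) / np.linalg.norm(G)

# The metric is zero for any rotated + translated copy of the truth:
rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(50, 2))
theta = 1.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X_moved = X @ R + np.array([3.0, -2.0])      # rotate and translate
assert normalized_error(X, X_moved) < 1e-8
```

A scaled estimate, by contrast, is penalized (e.g. `normalized_error(X, 2 * X)` is large), which is the desired behavior since distances fix the scale.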

Figure 6. Normalized localization error for the experimental data set as a function of the fraction of corrupted measurements.

Figure 7. Node location estimates for the experimental data set. (a) Location estimate before rotation (without using anchors); error per node = 1.91m. (b) Location estimate after rotation (four of the nodes fixed as anchors); error per node = 1.47m.

The theoretical lower bound on the radius itself turns out to be quite large for this case. Even in the LOS case, we carried out simulations for a large network of nodes using fast matrix completion solvers and saw that the localization was not accurate unless the connectivity radius was quite large. Since we are looking at local measurements, our observations are largely biased towards the smaller entries of the low-rank distance matrix, which seems to adversely affect the estimation process. Fig. 5 also shows the same plot for a fully connected network, i.e. all the pairwise distances are observed and the errors are only due to NLOS propagation. One can see that the algorithm can tolerate up to 30% of the measurements being NLOS.

B. Experimental results

The algorithm was applied to a data set [18] obtained from pairwise measurements in a 44 node network. The experiments were conducted in an indoor office environment by simulating 44 node locations using a transmitter and a receiver and obtaining pairwise time-of-arrival measurements. The environment had a lot of scatterers, and nearly all the distance estimates have strong NLOS biases in them. In order to validate the performance as a function of the fraction of NLOS measurements, some fraction of the measured distances were replaced by the true distances, and the performance is plotted as a function of this fraction in Fig. 6. The true and estimated node locations for the experimental data set (all the measurements are corrupted in this case) are shown in Fig. 7(b). Four nodes were designated as anchors (known node locations) and the other node locations were rotated to match the locations of the anchors, in order to have a fair comparison with the true node locations. All the node locations are centered at the origin. The average node error is 1.47m. This can be contrasted with the error of 1.26m reported by Patwari et al. [18]. However, they obtained the node locations after subtracting out the NLOS bias in the measurements and using one of the best LOS localization algorithms. We can see that our algorithm is quite robust, since our errors are not too far off given that we work on the NLOS corrupted data. Thus the robust matrix decomposition approach seems to handle the NLOS errors quite well.

V. CONCLUSION

In this work, we considered the problem of NLOS localization and proposed a matrix decomposition framework for the problem. Given that the problem is NP-complete in the general case, we showed that the problem can be solved in polynomial time in the special case of fixed radius random geometric graphs. We applied the algorithm to a real world sensor measurement dataset and showed that the algorithm is robust to NLOS errors. However, we saw from simulations that the connectivity radius needs to be quite large in order to obtain a good estimate of the node locations. This seems to be partially due to the observed entries being largely skewed towards the smaller entries of the distance matrix. One way of tackling this could be to introduce heterogeneous nodes in the network that have a larger communication radius, so that we also get to observe some fraction of the larger distances. Exploring these possibilities is part of future work.

APPENDIX

A. Proof of Lemma 2 [16]

Let

  X̃ = [ 1   x_1^T   ||x_1||²
        1   x_2^T   ||x_2||²
        ...
        1   x_N^T   ||x_N||² ] ∈ R^{N×(k+2)}

and

  Y = [ 0   0^T    1
        0   −2I_k  0
        1   0^T    0 ] ∈ R^{(k+2)×(k+2)}.

We have the relation D = X̃YX̃^T.
For a random placement of nodes we have that rank (X˜) is k + 2 with high probability and hence REFERENCES the rank of D is also k + 2 with high probability.

We can write X̃ = VA, where V ∈ R^{N×(k+2)} with V^T V = I, and A ∈ R^{(k+2)×(k+2)} (e.g. using the QR factorization). Thus D = VAYA^T V^T. Let the SVD of D be D = UΣU^T, with U ∈ R^{N×(k+2)} and U^T U = I. Hence U can be expressed as U = VQ for some Q ∈ R^{(k+2)×(k+2)} such that Q^T Q = I.

Let u_i^T and v_i^T denote the ith rows of U and V respectively. We have u_i^T = v_i^T Q, and thus

  max_i ||U^T e_i|| = max_i ||u_i|| = max_i ||v_i||.

We have V = X̃A^{−1}, so v_i^T = x̃_i^T A^{−1}. Therefore,

  ||v_i|| ≤ (σ_min(A^T))^{−1} ||x̃_i|| ≤ (σ_min(A^T))^{−1} sqrt(k(k+2)).

We have σ_min(A^T) = sqrt(λ_min(A^T A)). Note that X̃^T X̃ = A^T A, and

  X̃^T X̃ = Σ_{i=1}^{N} [ 1          x_i^T            ||x_i||²
                         x_i        x_i x_i^T        x_i ||x_i||²
                         ||x_i||²   ||x_i||² x_i^T   ||x_i||⁴ ]

         → N [ 1     0^T        k/3
               0     (1/3)I_k   0
               k/3   0^T        k/5 + k(k−1)/9 ]

as N grows, for nodes placed uniformly at random. One can verify that λ_min(A^T A) = N/3 with high probability for any value of k. Therefore,

  ||v_i|| ≤ sqrt(4k(k+2)/N).

Let us now look at ||UU^T||_∞:

  ||UU^T||_∞ = max_{i,j} |u_i^T u_j| = max_i u_i^T u_i = max_i ||v_i||² ≤ sqrt(16k²(k+2)²/N²).

Thus by choosing µ = 16k²(k+2) we get the result.

REFERENCES

[1] I. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, vol. 40, no. 8, pp. 102–114, 2002.
[2] G. Crippen and T. Havel, Distance Geometry and Molecular Conformation. Research Studies Press, Taunton, Somerset, England, 1988, vol. 74.
[3] S. Roweis and L. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science, vol. 290, no. 5500, 2000.
[4] J. Aspnes, D. Goldenberg, and Y. Yang, "On the computational complexity of sensor network localization," Algorithmic Aspects of Wireless Sensor Networks, pp. 32–44, 2004.
[5] P. Biswas and Y. Ye, "Semidefinite programming for ad hoc wireless sensor network localization," in IPSN 2004. ACM, 2004, pp. 46–54.
[6] A. Javanmard and A. Montanari, "Localization from incomplete noisy distance measurements," arXiv preprint arXiv:1103.1417.
[7] J. Aspnes, T. Eren, D. Goldenberg, A. Morse, W. Whiteley, Y. Yang, B. Anderson, and P. Belhumeur, "A theory of network localization," IEEE Transactions on Mobile Computing, vol. 5, no. 12, pp. 1663–1678, 2006.
[8] R. Z. and T. J. Christopher Ward, "Protein modeling with Blue Gene," http://www.ibm.com/developerworks/linux/library/l-bluegene/.
[9] J. Dattorro, Convex Optimization & Euclidean Distance Geometry. Meboo Publishing USA, 2005.
[10] I. Guvenc, C. Chong, and F. Watanabe, "NLOS identification and mitigation for UWB localization systems," in IEEE Wireless Communications and Networking Conference (WCNC) 2007, pp. 1571–1576.
[11] V. Ekambaram and K. Ramchandran, "Distributed high accuracy peer-to-peer localization in mobile multipath environments," in IEEE GLOBECOM 2010, pp. 1–5.
[12] C. Seow and S. Tan, "Non-line-of-sight localization in multipath environments," IEEE Transactions on Mobile Computing, vol. 7, no. 5, pp. 647–660, 2008.
[13] S. Venkatesh and R. Buehrer, "A linear programming approach to NLOS error mitigation in sensor networks," in Proceedings of the 5th International Conference on Information Processing in Sensor Networks. ACM, 2006, pp. 301–308.
[14] A. Alfakih, A. Khandani, and H. Wolkowicz, "Solving Euclidean distance matrix completion problems via semidefinite programming," Computational Optimization and Applications, vol. 12, no. 1, pp. 13–30, 1999.
[15] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis, "Low-rank matrix recovery from errors and erasures," arXiv preprint arXiv:1104.0354, 2011.
[16] A. Montanari and S. Oh, "On positioning via distributed matrix completion," in Sensor Array and Multichannel Signal Processing Workshop (SAM), 2010 IEEE, pp. 197–200.
[17] M. Appel and R. Russo, "The minimum vertex degree of a graph on uniform points in [0,1]^d," Advances in Applied Probability, pp. 582–594, 1997.
[18] N. Patwari, J. Ash, S. Kyperountas, A. Hero III, R. Moses, and N. Correal, "Locating the nodes: cooperative localization in wireless sensor networks," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 54–69, 2005.