
Non-Line-of-Sight Localization Using Low-rank + Sparse Matrix Decomposition

Venkatesan N. Ekambaram and Kannan Ramchandran
Department of EECS, University of California, Berkeley
Email: {venkyne, kannanr}@eecs.berkeley.edu

Abstract—We consider the problem of estimating the locations of a set of points in a k-dimensional Euclidean space given a subset of the pairwise distance measurements between the points. We focus on the case when some fraction of these measurements can be arbitrarily corrupted by large additive noise. This is motivated by applications like sensor networks, molecular conformation and manifold learning, where the measurement process can induce large bias errors in some fraction of the distance measurements due to physical effects like multipath, spin-diffusion etc. Given the NP-completeness of the problem, we propose a convex relaxation that involves decomposing the partially observed matrix of distance measurements into low-rank and sparse components, wherein the low-rank component corresponds to the Euclidean Distance Matrix and the sparse component is a matrix of biases. Using recent results from the literature, we show that this convex relaxation yields the exact solution for the class of fixed-radius random geometric graphs. We evaluate the performance of the algorithm on an experimental data set obtained from a network of 44 nodes in an indoor environment and show that the algorithm is robust to non-line-of-sight bias errors.

Keywords: Non-Line-of-Sight localization, robust matrix decomposition.

Figure 1. (a) Sensor localization: sensors placed in a field for monitoring the region, with distance/angle measurements subject to NLOS reflections. (b) Molecular conformation: Tryptophan, one of the 20 standard amino acids [8]. (c) Manifold learning: a high dimensional data set that spans a lower dimensional hypersurface [9].

I. INTRODUCTION

The problem of obtaining the locations of a set of points
given pairwise distances between the points is a topic of significant research interest. The problem has applications in a broad spectrum of areas such as sensor networks, molecular biology, data analysis, manifold learning etc. In sensor networks, the locations of different sensor nodes need to be estimated, given the distance measurements between nodes that are within some communication radius of each other [1] (Fig. 1(a)). The structure of a protein molecule is determined by estimating the distances between the component atoms of the molecule using techniques such as NMR spectroscopy [2] (Fig. 1(b)). Many applications that involve processing massive data sets in high dimensions require efficient representations of the data in a low dimensional space. Most of these data sets tend to span a low dimensional hypersurface in the higher dimensional space. Pairwise proximity measurements between the data points can be used to obtain an efficient representation of the data in lower dimensions while preserving the relative conformation of the points [3] (Fig. 1(c)).

Given the range of applications, significant research work in the literature is devoted to theory and algorithms for localization tailored to the problem of interest. The problem is shown to be NP-complete [4] in the general case, wherein one is provided with an arbitrary subset of the pairwise distance measurements and is asked to find a valid configuration of points satisfying these distance measurements. This is hard even when one has the side-information that there is a unique set of points satisfying the given distances. Hence, most of the work in the literature focuses on developing efficient localization algorithms with provable theoretical guarantees for specific node geometries, in the setting where the distance measurements are either exact or only slightly perturbed, which is the case for line-of-sight (LOS) localization [5], [6]. Existing theoretical results [7], [6] have shown that LOS localization can be achieved in polynomial time for random geometric graphs.

This project is supported in part by AFOSR grant FA9550-10-1-0567.

Figure 2. Centering the node locations: each node is translated by the centroid, x~_i = x_i − (1/3) Σ_{j=1}^{3} x_j, so that X~ = LX is centered at the origin.

Figure 3. Relating the distance matrix and the node locations: expanding D_ij = ||x_i − x_j||^2 gives D = vecdiag(XX^T) 1^T + 1 vecdiag(XX^T)^T − 2 XX^T, where vecdiag(·) is the column vector of diagonal entries.

We consider a generalized version of the original LOS localization problem, wherein we assume that some fraction of the distance measurements can have large bias errors.
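The identity relating the squared-distance matrix D to the Gram matrix XX^T can be verified numerically. A minimal numpy sketch (the variable names and the small random instance are ours, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 5, 2                      # N nodes in a k-dimensional space
X = rng.standard_normal((N, k))  # rows are the node locations x_i^T

# Direct computation: D_ij = ||x_i - x_j||^2
D_direct = np.array([[np.sum((X[i] - X[j]) ** 2) for j in range(N)]
                     for i in range(N)])

# Identity: D = vecdiag(XX^T) 1^T + 1 vecdiag(XX^T)^T - 2 XX^T
G = X @ X.T                    # Gram matrix XX^T
g = np.diag(G).reshape(-1, 1)  # vecdiag(XX^T) as a column vector
ones = np.ones((N, 1))
D_identity = g @ ones.T + ones @ g.T - 2 * G

assert np.allclose(D_direct, D_identity)
```

The identity follows entry-wise from ||x_i − x_j||^2 = ||x_i||^2 + ||x_j||^2 − 2 x_i^T x_j.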
This problem is traditionally known as non-line-of-sight (NLOS) localization. The motivation for this problem setup arises from real world physical processes that cause bias errors in the measurements. For example, multipath interference causes huge bias errors in the distance measurements obtained between nodes in a sensor network, or from the GPS satellites in vehicular networks [11]. The spin-diffusion phenomenon in NMR spectroscopy causes apparent shrinkage of the distance measurements in protein molecular conformation applications. Outliers in the data can cause large bias errors in applications like manifold learning. Thus there has been significant recent interest in developing algorithms that tackle NLOS errors [11], [?]. However, to the best of our knowledge, there are no theoretical guarantees for this problem.

There is some existing work in the literature that formulates LOS localization as a matrix completion problem [14]. The problem boils down to completing a low-rank matrix of squared distances known as the Euclidean Distance Matrix (EDM), and there are efficient convex relaxations that can be used for matrix completion. Our contribution in this work is the following:

(a) We formulate the NLOS localization problem as one of matrix decomposition, wherein we are interested in decomposing the partially observed matrix of corrupted pairwise squared distances into a low-rank EDM and a sparse matrix of bias errors.

(b) Using existing results from the matrix decomposition literature, we show that the proposed relaxation achieves exact localization for a random geometric graph with radius greater than a threshold that is a function of the fraction of LOS measurements.

(c) We evaluate the performance of the algorithm on a real-world dataset obtained from a static network of 44 nodes in an indoor environment.

Let D ∈ R^{N×N} be the matrix of all possible pairwise squared distances, i.e. D_ij = ||x_i − x_j||^2, where x_i^T is the ith row of X. Let D^ be the partially observed matrix of distance measurements. We will assume the following measurement model,

D^_ij = D_ij + B_ij + N_ij, ∀ (i, j) such that ||x_i − x_j|| ≤ r,

where B_ij is the bias in the measurement due to NLOS errors, N_ij is the thermal noise, and B is the matrix of biases. We will assume that only some fraction (α) of the entries {B_ij : ||x_i − x_j|| ≤ r} are non-zero. Further, we will also assume that the thermal noise is a small perturbation that is bounded by a quantity ∆.

We need to relate the matrix D and the node locations X. Clearly, D is invariant to rigid transformations of X, i.e. rotations, translations and reflections of the relative node placements. Hence we need to define a metric for comparing node location estimates that is invariant to these rigid transformations. The following equation relates the distance matrix D and the node locations X: D = vecdiag(XX^T) 1^T + 1 vecdiag(XX^T)^T − 2 XX^T, where the vecdiag operation stacks the diagonal entries of the input matrix into a column vector and 1 is a vector of ones (see Fig. 3). We will now derive one possible set of node locations given a distance matrix D. Since D is invariant to translations, let us normalize the node locations so that they are centered at the origin. Define L = I_N − (1/N) 11^T ∈ R^{N×N}. Note that X~ = LX is a set of node locations centered at the origin (see Fig. 2). Since 1 lies in the left and right null spaces of L, multiplying the expression for D on both sides by L gives −(1/2) LDL = LX(LX)^T. Thus, given a fully observed noiseless distance matrix D, one can obtain a set of node locations centered at the origin by choosing X~ = UΣ^{1/2}, where −(1/2) LDL has the singular value decomposition (SVD) UΣU^T.