On the Coherence Properties of Random Euclidean Distance Matrices
Dionysios S. Kalogerias† and Athina P. Petropulu
Department of Electrical and Computer Engineering, Rutgers, The State University of New Jersey
{d.kalogerias, athinap}@rutgers.edu
†Corresponding author. This work is supported by the National Science Foundation under Grant CNS-1239188.

Abstract—In the present paper we focus on the coherence properties of general random Euclidean distance matrices, which are very closely related to the respective matrix completion problem. This problem is of great interest in several applications such as node localization in sensor networks with limited connectivity. Our results can directly provide sufficient conditions under which an EDM can be successfully recovered with high probability from a limited number of measurements.

Index Terms—Random Euclidean Distance Matrices, Matrix Completion, Limited Connectivity, Subspace Coherence

I. INTRODUCTION

Considering $N$ points (or nodes) lying in $\mathbb{R}^d$ with respective positions $p_i \in \mathbb{R}^d$, $i \in \mathbb{N}_N^+ \equiv \{1, 2, \ldots, N\}$, the $(i,j)$-th entry of a Euclidean distance matrix (EDM) is defined as

$\Delta(i,j) \triangleq \|p_i - p_j\|_2^2 \in \mathbb{R}_+, \quad \forall (i,j) \in \mathbb{N}_N^+ \times \mathbb{N}_N^+$.   (1)

EDMs appear in a large variety of engineering applications, such as sensor network positioning and localization [1], [2], [3], distributed beamforming problems that rely on second order statistics of the internode channels [4], or molecular conformation [5], where, using NMR spectroscopy techniques, the distances of the atoms forming the protein molecule are estimated, leading to the determination of its structure. Recently, significant attention has been paid to the problem of EDM completion from partial distance measurements in sensor networks, which corresponds to scenarios with limited connectivity among the network nodes (see, e.g., [1]). It can be shown [1] that the rank of an EDM is less than or equal to $d+2$ and hence, for a large number of nodes, such a matrix is always low-rank. Therefore, under certain conditions, matrix completion can be used to recover the missing entries of the matrix.

In this paper, we focus on those properties of general random EDMs which can provide sufficient conditions under which the EDM completion problem can be successfully solved with high probability.

Relation to the literature: The special case where the node coordinates are drawn from $\mathcal{U}[-1,1]$ has been studied in [1], [3], [6]. In our paper, in addition to studying the more general case, we point out some issues in the respective proofs in [3], [6], which result in incorrect results. The correct results can be obtained as a special case of the general results that we propose. More details on the above issues are provided in Section IV.

The paper is organized as follows. In Section II, we provide a brief introduction to the problem of matrix completion, as presented and analyzed in [7]. In Section III, we present our main results. Finally, in Section IV, we discuss the connection of our results with the respective ones presented in [1], [2] and [3].

II. LOW RANK MATRIX COMPLETION

Consider a generic matrix $M \in \mathbb{R}^{K \times L}$ of rank $r$, whose compact Singular Value Decomposition (SVD) is given by $M = U \Sigma V^T \equiv \sum_{i \in \mathbb{N}_r^+} \sigma_i(M)\, u_i v_i^T$, with column and row subspaces denoted as $U$ and $V$, spanned by the sets $\{u_i \in \mathbb{R}^{K \times 1}\}_{i \in \mathbb{N}_r^+}$ and $\{v_i \in \mathbb{R}^{L \times 1}\}_{i \in \mathbb{N}_r^+}$, respectively.

Let $\mathcal{P}(M) \in \mathbb{R}^{K \times L}$ denote an entrywise sampling of $M$. In all the analysis that follows, we will adopt the theoretical framework presented in [7], according to which one hopes to reconstruct $M$ from $\mathcal{P}(M)$ by solving the convex program

minimize $\|X\|_*$
subject to $X(i,j) = M(i,j), \quad \forall (i,j) \in \Omega$,   (2)

where the set $\Omega$ contains all matrix coordinates corresponding to the observed entries of $M$ (contained in $\mathcal{P}(M)$) and where $\|X\|_*$ represents the nuclear norm of $X$.
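To make the recovery program concrete, the following minimal Python sketch (our own illustration, not part of [7]; it assumes the numpy and cvxpy packages, and the function name and the boolean mask encoding $\Omega$ are ours) solves the nuclear norm minimization (2) given a set of observed entries.

import numpy as np
import cvxpy as cp

def complete_matrix(M_obs, mask):
    # M_obs: K x L array holding the observed entries of M (arbitrary elsewhere).
    # mask:  K x L boolean array, True exactly on the observed index set Omega.
    K, L = M_obs.shape
    X = cp.Variable((K, L))
    # Equality constraints X(i, j) = M(i, j) for all (i, j) in Omega, as in (2).
    constraints = [X[i, j] == M_obs[i, j] for i, j in np.argwhere(mask)]
    problem = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
    problem.solve()
    return X.value

For large matrices one would vectorize the constraints, but the sketch suffices for experimenting with how many observed entries are needed for exact recovery, which is precisely what Theorem 1 below quantifies.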
Also in [7], the authors introduce the notion of subspace coherence, in order to derive specific conditions under which the solution of (2) coincides with $M$. The formal definition of subspace coherence follows, in a slightly more expanded form compared to the original definition stated in [7].

Definition 1. [7] Let $U \subseteq \mathbb{R}^N$ be an $r$-dimensional subspace spanned by the set of orthonormal vectors $\{u_i \in \mathbb{R}^{N \times 1}\}_{i \in \mathbb{N}_r^+}$. Also, define the matrix $U \triangleq [u_1\ u_2\ \ldots\ u_r] \in \mathbb{R}^{N \times r}$ and let $P_U \triangleq U U^T \in \mathbb{R}^{N \times N}$ be the orthogonal projection onto $U$. Then the coherence of $U$ with respect to the standard basis $\{e_i\}_{i \in \mathbb{N}_N^+}$ is defined as

$\mu(U) \triangleq \dfrac{N}{r} \sup_{i \in \mathbb{N}_N^+} \|P_U e_i\|_2^2 \equiv \dfrac{N}{r} \sup_{i \in \mathbb{N}_N^+} \sum_{k \in \mathbb{N}_r^+} \left(U(i,k)\right)^2$.   (3)

Additionally, the following crucial assumptions regarding the subspaces $U$ and $V$ are of particular importance [7].

A0. $\max\{\mu(U), \mu(V)\} \le \mu_0 \in \mathbb{R}_{++}$.

A1. $\left\|\sum_{i \in \mathbb{N}_r^+} u_i v_i^T\right\|_\infty \le \mu_1 \sqrt{\dfrac{r}{KL}}$, $\mu_1 \in \mathbb{R}_{++}$.

Indeed, if the constants $\mu_0$ and $\mu_1$ associated with the singular vectors of a matrix $M$ are known, the following theorem holds.

Theorem 1. [7] Let $M \in \mathbb{R}^{K \times L}$ be a matrix of rank $r$ obeying A0 and A1 and set $N \triangleq \max\{K, L\}$. Suppose we observe $m$ entries of $M$, with matrix coordinates sampled uniformly at random. Then there exist constants $C$, $c$ such that if

$m \ge C \max\left\{\mu_1^2,\ \mu_0^{1/2}\mu_1,\ \mu_0 N^{1/4}\right\} N r \beta \log N$   (4)

for some $\beta > 2$, the minimizer to the program (2) is unique and equal to $M$ with probability at least $1 - cN^{-\beta}$. For $r \le \mu_0^{-1} N^{1/5}$ this estimate can be improved to

$m \ge C \mu_0 N^{6/5} r \beta \log N$,   (5)

with the same probability of success.

Of course, the lower the rank of $M$, the smaller the required number of observations for achieving exact reconstruction. Regarding the rank of the EDM $\Delta$ at hand, one can easily prove the following lemma [1].

Lemma 1. Let $\Delta \in \mathbb{R}^{N \times N}$ be an EDM corresponding to the distances of $N$ points (nodes) in $\mathbb{R}^d$. Then, $\operatorname{rank}(\Delta) \le d+2$.

Thus, in the most common cases of interest, that is, when $d$ equals 2 or 3 and for a sufficiently large number of nodes $N$, $\operatorname{rank}(\Delta) \ll N$, and consequently the problem of recovering $\Delta$ from a restricted number of observations is of great interest.
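As a quick numerical illustration of Lemma 1 and of Definition 1 (a sketch of our own, assuming the numpy package and node coordinates drawn uniformly from $[-1,1]$; the variable names are ours), the following snippet builds an EDM from random positions, confirms that its rank does not exceed $d+2$, and evaluates the coherence $\mu(U)$ of its column space via (3).

import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 3
P = rng.uniform(-1.0, 1.0, size=(N, d))                   # node positions p_i in [-1, 1]^d
G = P @ P.T                                                # Gram matrix of the positions
sq_norms = np.diag(G)
Delta = sq_norms[:, None] - 2.0 * G + sq_norms[None, :]    # Delta(i, j) = ||p_i - p_j||_2^2

r = np.linalg.matrix_rank(Delta)
print(r)                                                   # Lemma 1: never exceeds d + 2 = 5 here

# Coherence (3) of the column space of Delta, spanned by its first r left singular vectors.
U, _, _ = np.linalg.svd(Delta)
U_r = U[:, :r]
mu_U = (N / r) * np.max(np.sum(U_r**2, axis=1))
print(mu_U)

The smaller $\mu(U)$ is, the weaker the sampling requirement (4) becomes; bounding this quantity for random EDMs is exactly the purpose of Theorem 2 in the next section.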
III. THE COHERENCE OF RANDOM EDMS

According to Theorem 1, the bound on the minimum number of observations required for the exact recovery of $\Delta$ from $\mathcal{P}(\Delta)$ involves the parameters $\mu_0$ and $\mu_1$. Next, we derive a general result, which provides estimates of these parameters in a probabilistic fashion.

Theorem 2. Consider $N$ points (nodes) lying almost surely in a $d$-dimensional convex polytope defined as $\mathcal{H}_d \triangleq [a,b]^d$ with $a < 0 < b$, and let $p_i \triangleq [x_{i1}\ x_{i2}\ \ldots\ x_{id}]^T \in \mathcal{H}_d$, $i \in \mathbb{N}_N^+$, denote the position of node $i$, where $x_{ij}$, $j \in \mathbb{N}_d^+$, represents its respective $j$-th position coordinate. Assume that each $x_{ij}$, $(i,j) \in \mathbb{N}_N^+ \times \mathbb{N}_d^+$, is drawn independently and identically according to an arbitrary but atomless probability measure $\mathcal{P}$ with finite moments up to order 4. Define

$m_k \triangleq \mathbb{E}\left\{x_{ij}^k\right\} \equiv \int x^k \, d\mathcal{P} < \infty, \quad \forall (i,j) \in \mathbb{N}_N^+ \times \mathbb{N}_d^+, \ \forall k \in \mathbb{N}_4^+$,

and let, without any loss of generality, $m_1 \equiv 0$. Also, define $c \triangleq \max\{|a|, |b|\}$ and pick a $t \in [0,1]$ and a probability of failure $\gamma \in [0,1]$. Then, as long as

$N \ge \dfrac{2\theta \left(\log(d+2) - \log(\gamma)\right)}{(1-t)^2}$,   (6)

the associated Euclidean distance matrix $\Delta$, with worst case rank $d+2$, obeys the assumptions A0 and A1 with

$\mu_0 = \dfrac{\theta}{t(d+2)} \quad \text{and} \quad \mu_1 = \dfrac{\theta}{t\sqrt{d+2}}$,   (7)

with probability of success at least $1 - \gamma$, where the constant $\theta$ (that is, independent of $N$) is defined as

$\theta \triangleq \dfrac{1 + dc^2 + d^2c^4}{\lambda_*}$,   (8)

with $\lambda_* \triangleq \min\{\lambda_1, \lambda_2, \lambda_3, m_2\}$, and where $\lambda_1, \lambda_2, \lambda_3$ are the real and positive solutions to the cubic equation

$\sum_{i=0}^{3} \alpha_i \lambda^i = 0$,   (9)

with

$\alpha_0 \triangleq \left(m_4 m_2 - m_2^3 - m_3^2\right) d$,   (10)
$\alpha_1 \triangleq -m_2^3 d^2 + \left(m_2^3 + m_3^2 + m_2^2 - m_4 - m_4 m_2\right) d - m_2$,   (11)
$\alpha_2 \triangleq m_2^2 d^2 + \left(m_4 - m_2^2\right) d + m_2 + 1$, and   (12)
$\alpha_3 \triangleq -1$.   (13)

Before we proceed with the proof of the theorem, let us state a well known result from random matrix theory, the matrix Chernoff bound (exponential form; see [8], Remark 5.3), which will come in handy in the last part of the proof.

Lemma 2. [8] Consider a finite sequence $\{F_i \in \mathbb{C}^{K \times K}\}_{i \in \mathbb{N}_N^+}$ of $N$ Hermitian and statistically independent random matrices satisfying

$F_i \succeq 0 \quad \text{and} \quad \lambda_{\max}(F_i) \le R, \quad \forall i \in \mathbb{N}_N^+$,   (14)

and define the constants

$\xi_{\min(\max)} \triangleq \lambda_{\min(\max)}\left(\sum_{i \in \mathbb{N}_N^+} \mathbb{E}\{F_i\}\right)$.   (15)

Define $F_s \triangleq \sum_{i \in \mathbb{N}_N^+} F_i$. Then, it is true that

$\mathbb{P}\left\{\lambda_{\min}(F_s) \le t\, \xi_{\min}\right\} \le K \exp\left(-\dfrac{(1-t)^2 \xi_{\min}}{2R}\right), \quad \forall t \in [0,1]$,   (16)

and

$\mathbb{P}\left\{\lambda_{\max}(F_s) \ge t\, \xi_{\max}\right\} \le K \left(\dfrac{e}{t}\right)^{t \xi_{\max}/R}, \quad \forall t \ge e$.

Proof of Theorem 2: In order to make it more tractable, we will divide the proof into the following subsections.

A. Characterization of the SVD of $\Delta$

To begin with, observe that $\Delta$ admits the rank-1 decomposition

$\Delta = 1_{N \times 1}\, \tilde{p}^T - 2 \sum_{i \in \mathbb{N}_d^+} x_i x_i^T + \tilde{p}\, 1_{1 \times N}$,   (17)

where $x_i \triangleq [x_{1i}\ x_{2i}\ \ldots\ x_{Ni}]^T \in \mathbb{R}^{N \times 1}$ collects the $i$-th coordinates of all nodes and $\tilde{p} \triangleq \left[\|p_1\|_2^2\ \|p_2\|_2^2\ \ldots\ \|p_N\|_2^2\right]^T \in \mathbb{R}^{N \times 1}$. Moreover, since $ADA^T \in \mathbb{R}^{(d+2) \times (d+2)}$ is symmetric by definition, the finite dimensional spectral theorem implies that it is diagonalizable, with eigendecomposition given by $ADA^T = Q \Lambda Q^T$, where $Q \in \mathbb{R}^{(d+2) \times (d+2)}$ with $QQ^T = Q^T Q \equiv I_{d+2}$, and $\Lambda \in \mathbb{R}^{(d+2) \times (d+2)}$ is diagonal, containing the eigenvalues of $ADA^T$.
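To give a feel for the constants appearing in Theorem 2, the following sketch (our own illustration, not taken from the paper; it assumes numpy and coordinates drawn uniformly from $[-1,1]$, for which $m_2 = 1/3$, $m_3 = 0$, $m_4 = 1/5$ and $c = 1$) solves the cubic (9) with the coefficients (10)-(13) and evaluates $\theta$, $\mu_0$, $\mu_1$ and the minimum number of nodes required by (6).

import numpy as np

# Moments of the uniform distribution on [-1, 1] (illustrative choice, m1 = 0).
m2, m3, m4, c = 1.0 / 3.0, 0.0, 1.0 / 5.0, 1.0
d = 3                     # ambient dimension of the node positions
t, gamma = 0.5, 0.01      # tuning parameter t in [0, 1] and failure probability gamma

# Coefficients of the cubic equation (9), as given in (10)-(13).
a0 = (m4 * m2 - m2**3 - m3**2) * d
a1 = -m2**3 * d**2 + (m2**3 + m3**2 + m2**2 - m4 - m4 * m2) * d - m2
a2 = m2**2 * d**2 + (m4 - m2**2) * d + m2 + 1.0
a3 = -1.0

lambdas = np.roots([a3, a2, a1, a0])     # roots of a3*x^3 + a2*x^2 + a1*x + a0 = 0
lam_star = min(lambdas.real.min(), m2)   # lambda_* = min{lambda_1, lambda_2, lambda_3, m_2}

theta = (1.0 + d * c**2 + d**2 * c**4) / lam_star                      # (8)
mu0 = theta / (t * (d + 2))                                            # (7)
mu1 = theta / (t * np.sqrt(d + 2))                                     # (7)
N_min = 2.0 * theta * (np.log(d + 2) - np.log(gamma)) / (1.0 - t)**2   # (6)

print(lambdas.real, lam_star, theta, mu0, mu1, N_min)

For this choice of moments the three roots are indeed real and positive, and the resulting $\mu_0$ and $\mu_1$ can be substituted directly into the sampling bound (4) of Theorem 1.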