
Generalized Euclidean Distance Matrices (GEDM)


arXiv:2103.03603v2 [math.FA] 19 Aug 2021

R. Balaji (Department of Mathematics, IIT Madras, Chennai, India), R.B. Bapat (Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, Delhi, India) and Shivani Goel (Department of Mathematics, IISc Bangalore, Bangalore, India)

ARTICLE HISTORY
Compiled August 20, 2021

ABSTRACT
Euclidean distance matrices (EDM) are symmetric nonnegative matrices with several interesting properties. In this article, we introduce a wider class of matrices called generalized Euclidean distance matrices (GEDMs) that include EDMs. Each GEDM is an entry-wise nonnegative matrix. A GEDM is not symmetric unless it is an EDM. By some new techniques, we show that many significant results on Euclidean distance matrices can be extended to generalized Euclidean distance matrices. These include results about eigenvalues, the Moore-Penrose inverse, majorization inequalities and infinitely divisible matrices. We finally give an application by constructing infinitely divisible matrices using generalized Euclidean distance matrices.

KEYWORDS
Euclidean distance matrices, Haynsworth inertia formula, Infinitely divisible matrices, Majorization, Laplacian matrices, Determinant, Spectral radius

AMS CLASSIFICATION
15A57

CONTACT: Shivani Goel. Email: shivani.goel.maths@gmail.com

1. Introduction

An n × n real matrix D = [d_ij] is a Euclidean distance matrix (EDM) if there exist vectors x^1, x^2, ..., x^n in a Euclidean space (V, ⟨·,·⟩) such that

    d_ij = ‖x^i − x^j‖²   for all i, j = 1, 2, ..., n,   (1)

or equivalently,

    d_ij = ⟨x^i − x^j, x^i − x^j⟩   for all i, j = 1, 2, ..., n.

Euclidean distance matrices appear in several fields. For instance, in approximation theory, Micchelli [9] proved the following striking result: if x^1, x^2, ..., x^n are distinct points in the plane, then

    (−1)^{n−1} det[√(c + ‖x^i − x^j‖²)] > 0   for all c > 0.

As a consequence, there exists a unique function

    f(x) = Σ_{k=1}^n c_k √(1 + ‖x − x^k‖²)

interpolating given data y_1, ..., y_n at x^1, ..., x^n. Euclidean distance matrices have several interesting properties. For example, we have the following basic result.

Theorem 1.1 (Menger, Schoenberg [7]). Let D = [d_ij] be an n × n symmetric matrix with zero diagonal. Then the following are equivalent.

1. D is an EDM.
2. If x = (x_1, ..., x_n)′ is such that Σ_{i=1}^n x_i = 0, then Σ_{i,j} d_ij x_i x_j ≤ 0.
3. Let 1 = (1, 1, ..., 1)′ ∈ R^n. Then the bordered matrix

       [ D   1 ]
       [ 1′  0 ]

   has exactly one (simple) positive eigenvalue.
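As a quick numerical illustration of this theorem (an added sketch, not part of the original paper), one can generate an EDM from random points with NumPy and check conditions 2 and 3:

```python
import numpy as np

# Added sketch: build an EDM from random points and verify conditions
# 2 and 3 of Theorem 1.1 numerically.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))            # rows are points x^1, ..., x^5
G = X @ X.T                                # Gram matrix
D = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G   # d_ij = ||x^i - x^j||^2

# Condition 2: x' D x <= 0 whenever the entries of x sum to zero.
x = rng.standard_normal(5)
x -= x.mean()                              # now 1'x = 0
quad = x @ D @ x

# Condition 3: the bordered matrix [[D, 1], [1', 0]] has exactly one
# (simple) positive eigenvalue.
B = np.zeros((6, 6))
B[:5, :5] = D
B[5, :5] = B[:5, 5] = 1.0
n_pos = int((np.linalg.eigvalsh(B) > 1e-9).sum())
```

Here the threshold 1e-9 simply guards against rounding in the zero eigenvalues.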

Symmetric matrices with exactly one simple positive eigenvalue are called elliptic matrices. Elliptic matrices play an important role in Alexandrov inequalities for mixed volumes [3, Chapter 5]. Elliptic matrices are also useful in obtaining infinitely divisible matrices: if [a_ij] is an elliptic matrix with all entries positive, then [1/a_ij^r] is positive semidefinite for every r > 0, i.e. [1/a_ij] is an infinitely divisible matrix. The main theme of [4] is to obtain infinitely divisible matrices via elliptic matrices.

1.1. Objective of the paper

In this paper, we investigate the so-called generalized Euclidean distance matrices (GEDM). All Euclidean distance matrices (EDM) are generalized Euclidean distance matrices. If a generalized Euclidean distance matrix is not an EDM, then it is not symmetric. Despite this fact, we extend many properties of EDMs to GEDMs. For example, we show that all eigenvalues of a GEDM are real and that a GEDM has at most one positive eigenvalue, that the null space of a GEDM is a subspace of 1⊥, that the Moore-Penrose inverse of a GEDM is negative semidefinite on 1⊥, and so on. These results are obtained by using new techniques that circumvent the standard arguments on symmetric matrices.

1.2. Definition of generalized Euclidean distance matrices

We begin with the following observation. Let D = [d_ij] be an n × n Euclidean distance matrix. In view of (1), if D = [d_ij] is an EDM, then

    d_ij = ‖x^i‖² + ‖x^j‖² − 2⟨x^i, x^j⟩   for all i, j = 1, 2, ..., n.

Recall that the Gram matrix of an ordered system of vectors {v^1, ..., v^n} in a Euclidean space is the matrix [⟨v^i, v^j⟩]. As every positive semidefinite matrix is a Gram matrix of some system of vectors in some inner product space, it follows that D = [d_ij] is an EDM if and only if there exists an n × n positive semidefinite matrix F = [f_ij]

such that

    d_ij = f_ii + f_jj − 2f_ij.   (2)

Let 1 be the vector of all ones in R^n. Define J := 11′. Then (2) can be rewritten as

    D = diag(F)J + J diag(F) − 2F.   (3)

Define

    P := I − (1/n) J   and   Y = [y_ij] := −(1/2) PDP,   (4)

where I is the n × n identity matrix. As the diagonal entries of D are zero, from equation (2), it can be verified that

    d_ij = y_ii + y_jj − 2y_ij,

or equivalently,

    D = diag(Y)J + J diag(Y) − 2Y.

By an easy verification, it follows that Y is positive semidefinite and Y1 = 0. Thus, a symmetric matrix D = [d_ij] is an EDM if and only if there exists a positive semidefinite matrix G = [g_ij] such that G1 = 0 (equivalently, all row sums and column sums of G are zero) and

    d_ij = g_ii + g_jj − 2g_ij.

Several important properties of Euclidean distance matrices depend on this characterization.
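This characterization is easy to test numerically. The following sketch (added here for illustration, not from the paper) starts from an EDM D, forms Y = −(1/2)PDP as in (4), and checks that Y is positive semidefinite, Y1 = 0, and that D is recovered from Y:

```python
import numpy as np

# Added sketch: verify d_ij = y_ii + y_jj - 2 y_ij with
# Y = -(1/2) P D P positive semidefinite and Y 1 = 0.
rng = np.random.default_rng(1)
n = 4
X = rng.standard_normal((n, 3))            # points x^1, ..., x^n
G = X @ X.T
D = np.diag(G)[:, None] + np.diag(G)[None, :] - 2 * G   # an EDM

P = np.eye(n) - np.ones((n, n)) / n        # orthogonal projection onto 1-perp
Y = -0.5 * P @ D @ P

min_eig = np.linalg.eigvalsh(Y).min()      # >= 0 up to rounding: Y is PSD
row_sum = np.abs(Y @ np.ones(n)).max()     # Y 1 = 0
D_back = np.diag(Y)[:, None] + np.diag(Y)[None, :] - 2 * Y
recon_err = np.abs(D - D_back).max()       # D is recovered from Y
```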

Definition 1.2. We say that an n × n real symmetric matrix L is a generalized Laplacian if L is positive semidefinite and L1 = 0.

It is easy to note that L is a generalized Laplacian if and only if there exists a positive semidefinite matrix F such that L = P F P , where P is the matrix in (4). Laplacian matrices of connected graphs are examples of generalized Laplacian matrices.

Definition 1.3. Let a and b be any two positive numbers and let L be an n × n generalized Laplacian matrix. Define

    d_ij = a² l_ii + b² l_jj − 2ab l_ij,   i, j = 1, 2, ..., n.

We say that [d_ij] is a generalized Euclidean distance matrix (GEDM). An easy computation shows that

    d_ji = b² l_ii + a² l_jj − 2ab l_ij.

Thus, D is symmetric if and only if a = b, and in this case all the diagonal entries of D are zero; hence D is then a Euclidean distance matrix. To illustrate the definition, consider the following example.

Example 1.4. Let

    L := [l_ij] = [  3  −1  −2 ]
                  [ −1   3  −2 ]
                  [ −2  −2   4 ].

Then L is a generalized Laplacian matrix. Set a = 1 and b = 3. For i, j = 1, 2, 3, define

    d_ij := a² l_ii + b² l_jj − 2ab l_ij
          = l_ii + 9 l_jj − 6 l_ij.

Now,

    D := [d_ij] = [ 12  36  51 ]
                  [ 36  12  51 ]
                  [ 43  43  16 ]

is a GEDM.
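Example 1.4 can be reproduced with a few lines of NumPy (an added check, not in the original):

```python
import numpy as np

# Added check: reproduce Example 1.4 (a = 1, b = 3).
L = np.array([[ 3., -1., -2.],
              [-1.,  3., -2.],
              [-2., -2.,  4.]])
a, b = 1.0, 3.0

# L is a generalized Laplacian: positive semidefinite with L 1 = 0.
is_gen_laplacian = (np.linalg.eigvalsh(L).min() > -1e-10
                    and np.abs(L @ np.ones(3)).max() < 1e-12)

# d_ij = a^2 l_ii + b^2 l_jj - 2 a b l_ij
d = np.diag(L)
D = a**2 * d[:, None] + b**2 * d[None, :] - 2 * a * b * L
err = np.abs(D - np.array([[12., 36., 51.],
                           [36., 12., 51.],
                           [43., 43., 16.]])).max()
```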

We note the following proposition.

Proposition 1.5. An n × n matrix D is a GEDM if and only if there exist x^1, ..., x^n in some Euclidean space (V, ⟨·,·⟩) such that

    d_ij = ⟨a x^i − b x^j, a x^i − b x^j⟩   and   Σ_{j=1}^n x^j = 0.

Proof. Let D = [dij] be a GEDM. Then,

    d_ij = a² l_ii + b² l_jj − 2ab l_ij,

where L = [l_ij] is a generalized Laplacian matrix and a, b are some positive numbers. Because L is positive semidefinite and L1 = 0, there exist x^1, ..., x^n in a Euclidean space (V, ⟨·,·⟩) such that

    l_ij = ⟨x^i, x^j⟩   and   Σ_{j=1}^n x^j = 0.

Thus,

    d_ij = a²⟨x^i, x^i⟩ + b²⟨x^j, x^j⟩ − 2ab⟨x^i, x^j⟩ = ⟨a x^i − b x^j, a x^i − b x^j⟩.

The converse is immediate.

1.3. Preliminaries

We fix the notation and mention a few results/definitions that are needed in the sequel.

(N1) All matrices considered are real. The transpose of a matrix A is written A′. The notation 1 will denote the vector of all ones in R^n, J will be the matrix 11′, and 1⊥ will be the subspace containing all vectors that are orthogonal to 1. As usual, e_1, ..., e_n will denote the standard orthonormal vectors in R^n; hence the first column of an n × n matrix A will be Ae_1, and so on. We use P to denote the orthogonal projection I − (1/n)J onto 1⊥.
(N2) The null space of A is denoted by null(A) and the column space (range) of A by col(A). As usual, ρ(A) will be the spectral radius of A, and the Moore-Penrose inverse of A will be written A†.
(N3) Let x := (x_1, ..., x_n)′ and y := (y_1, ..., y_n)′ be any two vectors. Let σ and π be permutations on {1, ..., n} such that

    x_σ(1) ≥ ··· ≥ x_σ(n)   and   y_π(1) ≥ ··· ≥ y_π(n).

We say that x is majorized by y if

    Σ_{i=1}^n x_i = Σ_{i=1}^n y_i   and   Σ_{j=1}^k x_σ(j) ≤ Σ_{j=1}^k y_π(j)   (k = 1, ..., n − 1).

To say that x is majorized by y, we use the notation x ≺ y.
(N4) Given an n × n matrix A = [a_ij], we use diag(A) to denote the vector (a_11, ..., a_nn)′ in R^n. If (p_1, ..., p_n)′ is a vector in R^n, we use Diag(p_1, ..., p_n) to denote the diagonal matrix with diagonal entries p_1, ..., p_n.
(N5) Let H be an n × n symmetric matrix with eigenvalues λ_1, ..., λ_n such that

    λ_1 ≥ ··· ≥ λ_n.

We now define λ(H) by

λ(H) = (λ1, . . . , λn)′.

(N6) By a theorem of Schur, if H is an n × n symmetric matrix, then

    diag(H) ≺ λ(H).

If A and B are n × n symmetric matrices, then a result of Ky Fan states that

    λ(A + B) ≺ λ(A) + λ(B).

(See e.g. [11, Theorems 7.14 and 7.15].)
(N7) The inertia of a symmetric matrix A is denoted by In(A) = (ν, δ, µ), where ν is the number of negative eigenvalues of A, δ is the nullity of A, and µ is the number of positive eigenvalues of A. If A is n × n, p ∈ R^n, and if

    Ã = [ A   p ]
        [ p′  0 ],

then the generalized Schur complement Ã/A is −p′A†p. If p ∈ col(A), then

    In(Ã/A) + In(A) = In(Ã).

(See [5, Theorem 2].)
(N8) An n × n matrix A is called an M-matrix if A = ρI − S, where S is a nonnegative matrix and ρ ≥ ρ(S).
(N9) Let A be an n × n symmetric matrix and p ∈ R^n. If the bordered matrix

    [ A   p ]
    [ p′  0 ]

has exactly one positive eigenvalue, then

    y ∈ p⊥  ⟹  y′Ay ≤ 0.

(See [6, Theorem 2.9].)

2. Results

In the sequel, we assume that D is a non-zero GEDM and L is the generalized Laplacian such that

    D = a² L̃J + b² JL̃ − 2abL,   (5)

where a and b are fixed positive numbers and L̃ := Diag(l_11, ..., l_nn).

2.1. Eigenvalues of a GEDM

We shall now show that all eigenvalues of D are real and, in fact, D is similar to a symmetric matrix.

Theorem 2.1. D is similar to a symmetric matrix and has exactly one positive eigenvalue.

Proof. If all the diagonal entries of D are zero, then D is a Euclidean distance matrix and in this case, the result is known. Now assume that D has at least one positive diagonal entry. Let the eigenvalues of L be arranged

    α_1 ≤ α_2 ≤ ··· ≤ α_n.

Since L is positive semidefinite and L1 = 0, α_1 = 0. Let U be an orthogonal matrix with first column equal to the unit vector (1/√n)1 and such that

    U′LU = Diag(0, α_2, ..., α_n).

Define

    (s_1, s_2, ..., s_n)′ := U′L̃JUe_1.

By an easy verification,

    s_1 = trace(L).

Furthermore,

    U′L̃JUe_i = 0,   i = 2, ..., n.

Thus,

    U′DU = a² U′L̃JU + b² U′JL̃U − 2ab U′LU

         = [ (a² + b²)s_1   b²s_2     ...   b²s_n    ]
           [ a²s_2         −2ab α_2   ...   0        ]   (6)
           [ ⋮              ⋮         ⋱     ⋮        ]
           [ a²s_n          0         ...  −2ab α_n  ].

Define

    W := Diag(b/a, 1, ..., 1).

Now,

    W⁻¹U′DUW = [ (a² + b²)s_1   ab s_2    ...   ab s_n   ]
               [ ab s_2        −2ab α_2   ...   0        ]   (7)
               [ ⋮              ⋮         ⋱     ⋮        ]
               [ ab s_n         0         ...  −2ab α_n  ];

hence D is similar to a symmetric matrix. By the interlacing theorem, W⁻¹U′DUW has at least n − 1 non-positive eigenvalues. Since trace(D) > 0, we see that

    (W⁻¹U′DUW)_11 > 0.

Hence, W⁻¹U′DUW has at least one positive eigenvalue. So, D has exactly one positive eigenvalue. The proof is now complete.
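Although D is not symmetric, Theorem 2.1 says its spectrum is real with a single positive eigenvalue; the following added sketch checks this on the matrix D of Example 1.4:

```python
import numpy as np

# Added check of Theorem 2.1 on the GEDM of Example 1.4.
D = np.array([[12., 36., 51.],
              [36., 12., 51.],
              [43., 43., 16.]])
eigs = np.linalg.eigvals(D)
max_imag = np.abs(eigs.imag).max()         # all eigenvalues are real
n_pos = int((eigs.real > 0).sum())         # exactly one of them is positive
```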

In the rest of the paper, we shall use W and U to denote the matrices defined in Theorem 2.1. We have the following corollary now.

Corollary 2.2. All the eigenvalues of the bordered matrix

    D̃ := [ D   1 ]
         [ 1′  0 ]

are real, and D̃ has exactly one positive eigenvalue.

Proof. Setting

    K := [ U  0 ] [ W  0 ] = [ UW  0 ]
         [ 0  1 ] [ 0  1 ]   [ 0   1 ],

we find that the bordered matrix

    D̃ = [ D   1 ]
        [ 1′  0 ]

is similar to

    Q := K⁻¹D̃K = [ W⁻¹U′DUW   W⁻¹U′1 ]   (8)
                 [ 1′UW        0     ].

Since U is an orthogonal matrix with first column equal to (1/√n)1,

    W⁻¹U′1 = Diag(a/b, 1, ..., 1)(√n e_1)
           = (√n (a/b), 0, ..., 0)′   (9)
           = √n (a/b) e_1.

Put

    δ := √n (a/b).

By (7) and (9),

    Q = [ W⁻¹U′DUW    δ e_1 ]   (10)
        [ (n/δ) e_1′   0    ].

Define

    G := Diag(I, √n/δ),

where I is the n × n identity matrix. Now,

    G⁻¹QG = [ W⁻¹U′DUW   √n e_1 ]   (11)
            [ √n e_1′     0     ].

As W⁻¹U′DUW is symmetric (see (7)), G⁻¹QG is symmetric. Thus, D̃ is similar to a symmetric matrix. Therefore, D̃ has only real eigenvalues.

We now claim that D̃ has exactly one positive eigenvalue. Let x = (x_1, ..., x_n)′ be orthogonal to e_1. Because x_1 = 0, from (7), we have

    x′W⁻¹U′DUWx = −2ab Σ_{i=2}^n x_i² α_i ≤ 0.   (12)

The subspace

    ∇ := {(x′, r)′ : x′e_1 = 0, r ∈ R}

of R^{n+1} has dimension n. Consider a vector w := (x′, r)′ ∈ ∇. By (11) and (12), since e_1′x = 0,

    w′G⁻¹QGw = x′W⁻¹U′DUWx + 2√n r e_1′x = x′W⁻¹U′DUWx ≤ 0.

Hence, G⁻¹QG has at least n non-positive eigenvalues. Because D has exactly one positive eigenvalue, G⁻¹QG has at least one positive eigenvalue. So, G⁻¹QG has exactly one positive eigenvalue. Because D̃ and G⁻¹QG are similar, D̃ has exactly one positive eigenvalue. The result is proved.

2.2. Sign pattern of (ρ(D)I − D)^r

Let the eigenvalues of D be δ1,...,δn. Since D is a nonnegative matrix and has exactly one positive eigenvalue, ρ(D) is the only positive eigenvalue of D. Therefore,

    γ_i := ρ(D) − δ_i ≥ 0,   i = 1, ..., n.

Now, let A be an invertible matrix such that

    A⁻¹DA = Diag(δ_1, ..., δ_n).

Define

    S := ρ(D)I − D.

Then, for any r > 0,

    S^r := A Diag(γ_1^r, ..., γ_n^r) A⁻¹.

As a consequence of Theorem 2.1, we now have the following result.

Corollary 2.3. If 0 < r < 1, then S^r is an M-matrix.

Proof. Fix 0 < r < 1. By Theorem 2.1, all eigenvalues of D are real and ρ(D) is the largest of them, so each γ_i = ρ(D) − δ_i is nonnegative and S = ρ(D)I − D is an M-matrix. Fractional powers of M-matrices with exponent in (0, 1) are again M-matrices (see Ando [1]); hence S^r is an M-matrix.

To illustrate, we give the following example.

Example 2.4. Consider the matrix D given in Example 1.4. The eigenvalues of D are approximately

    100.1322, −24 and −36.1322.

Now the matrix S is

    S = ρ(D)I − D = [  88.1322  −36       −51      ]
                    [ −36        88.1322  −51      ]
                    [ −43       −43        84.1322 ].

Then

    S^{1/2} = [  7.8037  −3.3378  −4.3690 ]
              [ −3.3378   7.8037  −4.3690 ]
              [ −3.6836  −3.6836   7.2073 ].

It is easy to see that S^{1/2} is an M-matrix.
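The example can be reproduced by computing S^{1/2} through the eigendecomposition of D, as in the definition of S^r above (an added sketch; the printed entries are approximations):

```python
import numpy as np

# Added check of Example 2.4: S^(1/2) has the sign pattern of an M-matrix.
D = np.array([[12., 36., 51.],
              [36., 12., 51.],
              [43., 43., 16.]])
w, A = np.linalg.eig(D)                    # D = A Diag(w) A^{-1}, w real
rho = w.real.max()                         # spectral radius of D
gamma = np.clip(rho - w.real, 0.0, None)   # eigenvalues of S = rho I - D
S_half = (A @ np.diag(np.sqrt(gamma)) @ np.linalg.inv(A)).real

diag_min = np.diag(S_half).min()           # positive diagonal
off_max = (S_half - np.diag(np.diag(S_half))).max()   # nonpositive off-diagonal
```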

2.3. Null space of a GEDM

The main result about the null space of a GEDM is that null(D) ⊆ 1⊥; so 1 ∈ col(D), and this in turn will be useful to investigate the Moore-Penrose inverse of D.

Theorem 2.5. If D is a GEDM, then null(D) ⊆ 1⊥.

Proof. Following the same notation as in Theorem 2.1, we have by equation (6),

    ∆ := U′DU = [ (a² + b²)s_1   b²s_2     ...   b²s_n    ]
                [ a²s_2         −2ab α_2   ...   0        ]
                [ ⋮              ⋮         ⋱     ⋮        ]
                [ a²s_n          0         ...  −2ab α_n  ].

As ∆_11 = (a² + b²)s_1 and s_1 = trace(L), we have ∆_11 > 0. Let Dx = 0. We claim that 1′x = 0. Since U∆U′ = D and Dx = 0, we have

∆U ′x = 0. (13)

If v := (s_2, s_3, ..., s_n)′ and S := Diag(2ab α_2, ..., 2ab α_n), then

    ∆ = [ ∆_11   b²v′ ]
        [ a²v    −S   ].

Put y := U′x. By writing y = (y_1, ȳ′)′, where y_1 ∈ R and ȳ ∈ R^{n−1}, from (13) we have

    y_1 ∆_11 + b²v′ȳ = 0   (14)

    y_1 a²v = Sȳ.   (15)

If possible, let y_1 ≠ 0. Then from (15) we have

    v = (1/(a² y_1)) Sȳ.

Thus by equation (14),

    y_1 ∆_11 + (b²/(a² y_1)) ȳ′Sȳ = 0,

or equivalently,

    y_1² ∆_11 + (b²/a²) ȳ′Sȳ = 0.   (16)

As S is positive semidefinite, ȳ′Sȳ is nonnegative. Because ∆_11 > 0,

    y_1² ∆_11 + (b²/a²) ȳ′Sȳ > 0,

contradicting (16). Hence, y_1 = 0. Since Uy = x and y = (0, y_2, ..., y_n)′,

    x ∈ span{Ue_2, ..., Ue_n} = 1⊥.

So, 1′x = 0. The proof is complete.

Corollary 2.6. null(D′) ⊆ 1⊥.

Proof. As D′ is also a GEDM, null(D′) ⊆ 1⊥.

Corollary 2.7. 1 ∈ col(D) ∩ col(D′).

Proof. This follows easily since null(D) = col(D′)⊥.

Corollary 2.8. 1′D†1 = 0 if and only if there exists f ∈ 1⊥ such that Df = 1.

Proof. Suppose 1′D†1 = 0. Let Dx = 1. Since DD†1 = 1, we have x − D†1 ∈ null(D), and because null(D) ⊆ 1⊥, 1′(x − D†1) = 0. So, 1′x = 1′D†1 = 0; hence f := x is a vector in 1⊥ with Df = 1. Conversely, let Df = 1 and 1′f = 0. Since DD†1 = 1 and null(D) ⊆ 1⊥, f − D†1 ∈ 1⊥; hence 1′D†1 = 1′f = 0.

Corollary 2.9. The following are equivalent.

(a) 1′D†1 ≠ 0.
(b) null(D) = null(D′) = null(L) ∩ 1⊥.

Proof. Assume (a). To prove (b), it suffices to show that

    null(D) = null(L) ∩ 1⊥.

Let x ∈ null(D). By Theorem 2.5, x ∈ 1⊥, and hence Jx = 0. Since

    0 = x′Dx = a²x′L̃Jx + b²x′JL̃x − 2ab x′Lx = −2ab x′Lx,

and L is positive semidefinite, we get Lx = 0. So, x ∈ null(L). Thus,

    null(D) ⊆ null(L) ∩ 1⊥.

Now, let f ∈ null(L) ∩ 1⊥. Then,

    Df = a²L̃Jf + b²JL̃f − 2abLf
       = b²JL̃f = b²11′L̃f = b²(1′L̃f)1.   (17)

From Corollary 2.8 and (a), we find that 1′L̃f = 0: otherwise (17) would give a vector g ∈ 1⊥ with Dg = 1, forcing 1′D†1 = 0. Thus, Df = 0, i.e., f ∈ null(D). This proves (a) ⟹ (b).

Assume (b). If 1′D†1 = 0, then by Corollary 2.8 there exists f ∈ 1⊥ such that Df = 1. Therefore, f′Df = 0, and this gives f′Lf = 0. Since L is positive semidefinite, Lf = 0, and therefore, by our assumption, f ∈ null(D), contradicting Df = 1. Thus (b) ⟹ (a). The proof is complete.

Corollary 2.10. null(D) = null(L̃J + JL̃ − 2L).

Proof. Put

    E := L̃J + JL̃ − 2L.

Let x ∈ null(D). Then by Theorem 2.5, x ∈ 1⊥. So,

    0 = Dx = a²L̃Jx + b²JL̃x − 2abLx
           = b²JL̃x − 2abLx   (18)
           = b²11′L̃x − 2abLx.

Because Lx is an element of 1⊥ while 11′L̃x lies in span{1}, both terms in (18) must vanish, and we get

    1′L̃x = 0   and   Lx = 0.   (19)

On the other hand, we have

    Ex = L̃Jx + JL̃x − 2Lx = JL̃x − 2Lx = 11′L̃x − 2Lx.

By (19), it now follows that Ex = 0. Thus, x ∈ null(E). So,

    null(D) ⊆ null(E).

A similar argument leads to null(E) ⊆ null(D). The proof is complete.

2.4. Moore-Penrose inverse of a GEDM

We now obtain some properties of the Moore-Penrose inverse of D. The following result says that 1′D⁻1 is invariant for any choice of g-inverse D⁻ of D.

Theorem 2.11. If D is a GEDM, and if D⁻ is a g-inverse of D, then 1′D⁻1 = 1′D†1.

Proof. As 1 is an element in the column space of D and D′, we have

    1′D†D = 1′   and   DD†1 = 1.

So,

1′D−1 = 1′D†DD−DD†1.

As DD−D = D, we get

1′D−1 = 1′D†DD†1 = 1′D†1.

Theorem 2.12. 1′D†1 ≥ 0.

Proof. In view of equation (11), D̃ is similar to

    S̃ := [ W⁻¹U′DUW   √n e_1 ]   (20)
         [ √n e_1′     0     ],

which is symmetric. We claim the following.

Claim: There exists x ∈ R^n such that W⁻¹U′DUWx = e_1.

Since UWe_1 ∈ span{1} and 1 is in the column space of D, there exists y ∈ R^n such that Dy = UWe_1. As U and W are non-singular, y = UWx for some x ∈ R^n. Thus, DUWx = UWe_1, so W⁻¹U′DUWx = e_1 and the claim is true. In particular, √n e_1 ∈ col(W⁻¹U′DUW), and the inertia formula in (N7) applies to S̃:

    In(S̃) = In(W⁻¹U′DUW) + In(−n e_1′(W⁻¹U′DUW)†e_1).

By Corollary 2.2, D̃, and hence S̃, has exactly one positive eigenvalue. Since W⁻¹U′DUW also has exactly one positive eigenvalue, the above formula gives

    −n e_1′(W⁻¹U′DUW)†e_1 ≤ 0.

From the definition of U and W, and using (W⁻¹U′DUW)† = W⁻¹U′D†UW, we see that

    e_1′W⁻¹U′D†UWe_1 = (1/n) 1′D†1,

and hence 1′D†1 ≥ 0. This proves the result.

Theorem 2.13. −PD†P is positive semidefinite.

Proof. We first claim that PD†P is symmetric. Let U be the orthogonal matrix given in Theorem 2.1. Define

    f_i := Ue_i,   i = 2, ..., n.   (21)

Each f_i ∈ 1⊥, and f_2, ..., f_n form an orthonormal basis of 1⊥. To complete the proof of the claim, it suffices to show that

    f_i′D†f_j = f_j′D†f_i,   i, j = 2, ..., n.

Fix i ≠ j with i, j ≥ 2. We know from Theorem 2.1 that W⁻¹U′D†UW is symmetric. Hence,

    e_i′W⁻¹U′D†UWe_j = e_j′W⁻¹U′D†UWe_i.   (22)

Since

    e_i′W⁻¹ = e_i′   and   We_j = e_j   for all i, j ≥ 2,

we have

    e_i′W⁻¹U′D†UWe_j = e_i′U′D†Ue_j;

hence by (21),

    e_i′W⁻¹U′D†UWe_j = f_i′D†f_j.

By a similar reasoning,

    e_j′W⁻¹U′D†UWe_i = f_j′D†f_i.

In view of (22),

fi′D†fj = fj′D†fi.

Thus, PD†P is symmetric. Consider the bordered matrix

    S̃ := [ W⁻¹U′D†UW   e_1 ]
         [ e_1′         0  ].

Because 1 ∈ col(D), e_1 ∈ col(W⁻¹U′D†UW). Since W⁻¹U′DUW is symmetric, (W⁻¹U′DUW)† is symmetric as well. Because

    (W⁻¹U′DUW)† = W⁻¹U′D†UW,

S̃ is symmetric. By the inertia formula in (N7), we have

    In(−e_1′W⁻¹U′DUWe_1) + In(W⁻¹U′D†UW) = In(S̃).   (23)

By an easy computation, we see that

    e_1′W⁻¹U′DUWe_1 = (1/n) 1′D1.

So, since D is a non-zero nonnegative matrix, 1′D1 > 0 and

    In(−e_1′W⁻¹U′DUWe_1) = (1, 0, 0).

Because W⁻¹U′D†UW has exactly one positive eigenvalue, it follows from (23) that S̃ has exactly one positive eigenvalue. By (N9),

    x ∈ e_1⊥  ⟹  x′W⁻¹U′D†UWx ≤ 0.

Now let z ∈ 1⊥ and put x := W⁻¹U′z. Then e_1′x = (a/(b√n)) 1′z = 0, and since the first coordinate of x is zero, z′D†z = x′W⁻¹U′D†UWx ≤ 0. Together with the symmetry of PD†P, this proves that −PD†P is positive semidefinite. The proof is complete.

Corollary 2.14. D† is negative semidefinite on 1⊥.

Proof. Let x ∈ 1⊥. Then x′D†x = x′PD†Px; hence by Theorem 2.13, x′D†x ≤ 0.

Theorem 2.15. Let E = L̃J + JL̃ − 2L. Then

    1′D†1 > 0 if and only if 1′E†1 > 0.

Proof. Suppose 1′D†1 > 0. If possible, let 1′E†1 = 0. We shall now get a contradiction. Since E is itself a GEDM (take a = b = 1 in Definition 1.3), Corollary 2.8 gives f ∈ 1⊥ such that

    Ef = 1.   (24)

As f′Ef = f′1 = 0, we have f′Lf = 0; so Lf = 0. We now have

    Df = a²L̃Jf + b²JL̃f − 2abLf
       = b²11′L̃f = b²(1′L̃f)1.   (25)

Since 1′D†1 > 0, from (25) and Corollary 2.8, we find that 1′L̃f = 0. Thus, f ∈ null(D). By Corollary 2.10, f ∈ null(E). This contradicts (24). So, 1′E†1 ≠ 0, and by Theorem 2.12 (applied to the GEDM E), 1′E†1 > 0. By a similar argument, we get the reverse implication.
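On the matrix of Example 1.4, the Moore-Penrose results above can be verified directly (an added numerical sketch):

```python
import numpy as np

# Added check: P D^+ P is symmetric and -P D^+ P is positive semidefinite
# (Theorem 2.13 / Corollary 2.14), on the GEDM of Example 1.4.
D = np.array([[12., 36., 51.],
              [36., 12., 51.],
              [43., 43., 16.]])
Dp = np.linalg.pinv(D)
P = np.eye(3) - np.ones((3, 3)) / 3
M = P @ Dp @ P

sym_err = np.abs(M - M.T).max()                     # P D^+ P is symmetric
neg_max = np.linalg.eigvalsh((M + M.T) / 2).max()   # <= 0: -P D^+ P is PSD
```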

2.5. Generalized circum Euclidean distance matrices

Suppose p^1, ..., p^n are vectors in a Euclidean space (V, ⟨·,·⟩). If there exist a vector v ∈ V and a number r > 0 such that

    ⟨p^i − v, p^i − v⟩ = ‖p^i − v‖² = r   for all i = 1, ..., n,

then we say that p^1, ..., p^n lie on the surface of a hypersphere. The Euclidean distance matrix [‖p^i − p^j‖²] is then called a circum EDM. A well-known result ([10, Theorem 3.4]) characterizes all circum EDMs: E is a circum EDM if and only if there exist a vector s and a number β such that

    Es = β1   and   s′1 = 1.

This is equivalent to saying that E is a circum EDM if and only if 1′E†1 > 0. We now introduce the following definition.

Definition 2.16. We say that D = [d_ij] is a circum GEDM if there exist a, b > 0 and vectors x^1, ..., x^n on the surface of a hypersphere such that

    d_ij = ‖a x^i − b x^j‖²,

where Σ_{i=1}^n x^i = 0.

We now have the following result for GEDMs.

Theorem 2.17. The following are equivalent.

(1) D is a circum GEDM.
(2) 1′D†1 > 0.

Proof. As D = [d_ij] is a GEDM, there exist vectors x^1, ..., x^n such that

    Σ_{j=1}^n x^j = 0   and   d_ij = ‖a x^i − b x^j‖².

Define

    e_ij := ‖x^i − x^j‖²   and   E := [e_ij].

Assume (1). Then x^1, ..., x^n lie on the surface of a hypersphere. So, E is a circum EDM and hence 1′E†1 > 0. By Theorem 2.15, 1′D†1 > 0. This proves (2).

Assume (2). Then by Theorem 2.15, 1′E†1 > 0. So, x^1, ..., x^n lie on the surface of a hypersphere. Hence D is a circum GEDM.

Corollary 2.18. If D is a circum GEDM, then rank(D) = rank(L) + 1; otherwise rank(D) = rank(L) + 2.

Proof. Let E = L̃J + JL̃ − 2L. By Corollary 2.10, rank(D) = rank(E). By the previous result, D is a circum GEDM if and only if 1′D†1 > 0, and by Theorem 2.15, 1′E†1 > 0 if and only if 1′D†1 > 0. In view of Proposition 1 in [8], we get

    rank(E) = rank(L) + 1   if 1′E†1 > 0,
    rank(E) = rank(L) + 2   otherwise.

The proof now follows easily.
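For the matrix of Example 1.4, rank(L) = 2 and 1′D†1 > 0, so the corollary predicts rank(D) = 3; a quick added check:

```python
import numpy as np

# Added check of Corollary 2.18 on Example 1.4.
L = np.array([[ 3., -1., -2.],
              [-1.,  3., -2.],
              [-2., -2.,  4.]])
D = np.array([[12., 36., 51.],
              [36., 12., 51.],
              [43., 43., 16.]])
rank_L = np.linalg.matrix_rank(L)          # = 2 = n - 1
one = np.ones(3)
s = one @ np.linalg.pinv(D) @ one          # 1' D^+ 1 > 0: D is a circum GEDM
rank_D = np.linalg.matrix_rank(D)          # = rank(L) + 1
```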

We now obtain a formula to compute the Moore-Penrose inverse of a circum GEDM.

Theorem 2.19. If D is a circum GEDM, then

    D† = −(1/(2ab)) L† + (1/(1′D†1)) (D†1)(1′D†).

Proof. Let

    α := 1′D†1,   S := D† − (1/α)(D†1)(1′D†)   and   K := PDP.

We now prove that SKS = S. Let x ∈ R^n. Then

    x = c_1 1 + c_2 f

for some c_1, c_2 ∈ R and f ∈ 1⊥. Since S1 = 0, we see that

Sx = c2Sf.

As P 1 = 0 and Pf = f,

SPx = c2Sf.

Therefore, SP = S. In a similar manner, by using 1′S = 0 and 1′P = 0, we get PS = S. Thus, to prove SKS = S, it suffices to show that SDS = S. As 1′S = 0, we note that

    SDS = (D† − (1/α)(D†1)(1′D†))DS
        = D†DS   (26)
        = D†D(D† − (1/α)(D†1)(1′D†))
        = S.

Hence, SKS = S. We claim that KSK = K. Again by using PS = SP = S and P1 = 0, we see that

    KSK = (PDP)S(PDP) = (PDS)(PDP) = PDSDP
        = PD(D† − (1/α)D†11′D†)DP   (27)
        = PDP = K.

Since SKS = S and KSK = K, and since SK = D†DP and KS = PDD† are both symmetric (because null(D) and null(D′) are contained in 1⊥), we conclude that S is the Moore-Penrose inverse of K. Thus,

(P DP )† = S.

Since L1 = 0 and L is symmetric, PL = LP = L, and hence

    PDP = −2ab PLP = −2abL.   (28)

So,

    (PDP)† = −(1/(2ab)) L†.

Thus,

    S = (PDP)† = D† − (1/α)(D†1)(1′D†).

This completes the proof of the formula

    D† = −(1/(2ab)) L† + (1/(1′D†1)) (D†1)(1′D†).
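Theorem 2.19 can be confirmed numerically on Example 1.4, where a = 1, b = 3 and D is a circum GEDM (an added sketch):

```python
import numpy as np

# Added check of the Moore-Penrose formula of Theorem 2.19 (a = 1, b = 3).
L = np.array([[ 3., -1., -2.],
              [-1.,  3., -2.],
              [-2., -2.,  4.]])
D = np.array([[12., 36., 51.],
              [36., 12., 51.],
              [43., 43., 16.]])
a, b = 1.0, 3.0
Dp = np.linalg.pinv(D)
one = np.ones(3)
alpha = one @ Dp @ one                     # 1' D^+ 1, positive here
rhs = -np.linalg.pinv(L) / (2 * a * b) + np.outer(Dp @ one, one @ Dp) / alpha
formula_err = np.abs(Dp - rhs).max()
```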

2.6. Some majorization results

Suppose all the eigenvalues of an n × n matrix A are real. Let the eigenvalues of A be λ_1(A), ..., λ_n(A), where

    λ_1(A) ≥ ··· ≥ λ_n(A).

We now use λ(A) to denote the vector (λ_1(A), ..., λ_n(A))′. In the following, we obtain a Schur-type majorization result for GEDMs.

Theorem 2.20. diag(D) is majorized by λ(D).

Proof. In view of (6),

    U′DU = [ (a² + b²)trace(L)   b²s′                     ]   (29)
           [ a²s                 −2ab Diag(α_2, ..., α_n) ],

where s := (s_2, ..., s_n)′. Define

    x := ((a² + b²)trace(L), −2ab α_2, ..., −2ab α_n)′.

Without loss of generality, we can assume that

    −2ab α_2 ≥ ··· ≥ −2ab α_n   and   −l_11 ≥ ··· ≥ −l_nn.

We now prove the following.

Claim: (a − b)² diag(L) ≺ x.

Set α_1 = 0. By the majorization result of Schur,

    −2ab diag(L) ≺ −2ab λ(L).

So, for each 1 ≤ k ≤ n − 1,

    −2ab Σ_{i=1}^k l_ii ≤ −2ab Σ_{i=1}^k α_i.   (30)

As L is positive semidefinite, for each 1 ≤ k ≤ n − 1,

    (a² + b²) Σ_{i=1}^k l_ii ≤ (a² + b²) trace(L).   (31)

Using (30) and (31), for each 1 ≤ k ≤ n − 1, we find that

    (a − b)² Σ_{i=1}^k l_ii ≤ (a² + b²) trace(L) − 2ab Σ_{i=1}^k α_i.

Furthermore,

    (a − b)² Σ_{i=1}^n l_ii = (a² + b²) trace(L) − 2ab Σ_{i=1}^n α_i.

Hence,

    (a − b)² diag(L) ≺ x.

This proves the claim. By an easy verification,

    diag(D) = (a − b)² diag(L).

Therefore,

    diag(D) ≺ x.   (32)

We now recall equation (7):

    W⁻¹U′DUW = [ (a² + b²)s_1   ab s_2    ...   ab s_n   ]
               [ ab s_2        −2ab α_2   ...   0        ]   (33)
               [ ⋮              ⋮         ⋱     ⋮        ]
               [ ab s_n         0         ...  −2ab α_n  ].

Since W⁻¹U′DUW is symmetric and diag(W⁻¹U′DUW) = x, by the Schur majorization result,

    x ≺ λ(W⁻¹U′DUW) = λ(D).   (34)

By (32) and (34),

    diag(D) ≺ λ(D).

The proof is complete.
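Theorem 2.20 can be spot-checked on Example 1.4 (an added sketch): after sorting, diag(D) = (16, 12, 12), while λ(D) ≈ (100.13, −24, −36.13).

```python
import numpy as np

# Added check of Theorem 2.20: diag(D) is majorized by the eigenvalues of D.
D = np.array([[12., 36., 51.],
              [36., 12., 51.],
              [43., 43., 16.]])
d = np.sort(np.diag(D))[::-1]
lam = np.sort(np.linalg.eigvals(D).real)[::-1]

sums_equal = abs(d.sum() - lam.sum()) < 1e-9       # both equal trace(D)
partial_ok = all(d[:k].sum() <= lam[:k].sum() + 1e-9 for k in range(1, 4))
```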

We now prove another result.

Theorem 2.21. λ(D) ≺ λ((D + D′)/2).

Proof. By (6),

    U′DU = [ (a² + b²)trace(L)   b²s′                     ]
           [ a²s                 −2ab Diag(α_2, ..., α_n) ].

Hence,

    U′(D + D′)U = [ 2(a² + b²)trace(L)   (a² + b²)s′              ]
                  [ (a² + b²)s           −4ab Diag(α_2, ..., α_n) ].

By (7),

    W⁻¹U′DUW = [ (a² + b²)trace(L)   ab s′                    ]
               [ ab s                −2ab Diag(α_2, ..., α_n) ].

Put

    F := W⁻¹U′DUW.

Since

    ab ≤ (a² + b²)/2,

we can write

    U′(D + D′)U = 2F + [ 0     α s′ ]
                       [ α s   0    ]

for some α > 0. Put

    Q := [ 0   s′ ]
         [ s   0  ].

    U′(D + D′)U − αQ = 2F.

By the Ky Fan majorization theorem,

    λ(U′(D + D′)U − αQ) ≺ λ(D + D′) − αλ(Q).

As

    λ(Q) = (λ_1(Q), 0, ..., 0, −λ_1(Q))   and   λ_1(Q) > 0,

we have

    λ(D + D′) − αλ(Q) ≺ λ(D + D′).

Thus,

    λ(U′(D + D′)U − αQ) ≺ λ(D + D′),

i.e.

    λ(2F) ≺ λ(D + D′).

    λ(2F) = λ(2D),

we get

    λ(D) ≺ λ((D + D′)/2).

This completes the proof.

As an immediate corollary of the above result, we have the following.

Corollary 2.22. ρ(D) ≤ ρ((D + D′)/2).
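For Example 1.4 this gives ρ(D) ≈ 100.13 ≤ ρ((D + D′)/2), which the following added check confirms:

```python
import numpy as np

# Added check of Corollary 2.22 on Example 1.4.
D = np.array([[12., 36., 51.],
              [36., 12., 51.],
              [43., 43., 16.]])
rho_D = np.abs(np.linalg.eigvals(D)).max()            # spectral radius of D
rho_sym = np.abs(np.linalg.eigvalsh((D + D.T) / 2)).max()   # of symmetric part
```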

3. Application

We end this paper with an application.

3.1. Constructing infinitely divisible matrices

Generalized Euclidean distance matrices can be used to construct infinitely divisible matrices. Recall that a symmetric matrix E = [e_ij] is infinitely divisible if E is entry-wise nonnegative and [e_ij^r] is a positive semidefinite matrix for all r ≥ 0.

Theorem 3.1. Let S = [s_ij] be an n × n generalized Laplacian matrix. If a, b > 0 and a ≠ b, define

    d_ij := a² s_ii + b² s_jj − 2ab s_ij,   f_ij := max(d_ij, d_ji);

D := [dij] and F := [fij].

If rank(S) = n − 1, then each f_ij > 0 and [1/f_ij] is an infinitely divisible matrix.

Proof. We claim that d_ij > 0 for all i, j. Since S1 = 0 and rank(S) = n − 1, all cofactors of S are equal and non-zero; hence every proper principal minor of S is positive. In particular, s_ii > 0 for each i, and every 2 × 2 principal submatrix of S is positive definite, so that s_ij < √(s_ii s_jj) for i ≠ j. Hence d_ii = (a − b)² s_ii > 0 and, for i ≠ j,

    d_ij = a² s_ii + b² s_jj − 2ab s_ij ≥ 2ab (√(s_ii s_jj) − s_ij) > 0.

Thus f_ij > 0 for all i, j.

Put G := [f_ij] = [max(d_ij, d_ji)]. By Theorem 4.2.9 in Bapat [2], it suffices to show that G has exactly one (simple) positive eigenvalue. We shall use the following identity: if α and β are positive, then

    max(α, β) = (1/2)[(α + β) + |α − β|].

Hence,

    2G = [d_ij + d_ji] + [|d_ij − d_ji|]

has exactly one simple positive eigenvalue. Put

    A := [d_ij + d_ji]   and   B := [|d_ij − d_ji|].

If D = [d_ij], then for any x ∈ 1⊥,

    x′Dx = −2ab (x′Sx) ≤ 0,

and hence x′Ax ≤ 0. We now claim that B is negative semidefinite on 1⊥ as well. It can be noted easily that

    |d_ij − d_ji| = |a² − b²| |s_ii − s_jj|.

If α, β ≥ 0, then

    |α − β| = α + β − 2 min(α, β),

and therefore,

    |d_ij − d_ji| = |a² − b²| (s_ii + s_jj − 2 min(s_ii, s_jj)).

It is well known that the matrix [min(s_ii, s_jj)] is positive semidefinite. So, B is negative semidefinite on 1⊥, and consequently 2G = A + B is negative semidefinite on 1⊥. Thus, G has at least n − 1 non-positive eigenvalues. Since the diagonal entries of G are positive, G has at least one positive eigenvalue. Thus, G has exactly one simple positive eigenvalue. This completes the proof.
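As a numerical illustration of Theorem 3.1 (an added sketch), take the generalized Laplacian of Example 1.4, which has rank n − 1 = 2, with a = 1 and b = 3, and test a few Hadamard powers of [1/f_ij]:

```python
import numpy as np

# Added sketch of Theorem 3.1 with S from Example 1.4 (rank 2 = n - 1).
S = np.array([[ 3., -1., -2.],
              [-1.,  3., -2.],
              [-2., -2.,  4.]])
a, b = 1.0, 3.0
s = np.diag(S)
D = a**2 * s[:, None] + b**2 * s[None, :] - 2 * a * b * S
F = np.maximum(D, D.T)                     # f_ij = max(d_ij, d_ji)

all_positive = F.min() > 0                 # each f_ij > 0
# Infinite divisibility: the Hadamard powers [1/f_ij^r] stay PSD.
psd_all = all(np.linalg.eigvalsh((1.0 / F) ** r).min() > -1e-10
              for r in (0.5, 1.0, 2.0, 3.5))
```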

Funding

The second author acknowledges the support of the Indian National Science Academy under the INSA Senior Scientist scheme.

References

[1] Ando T. Inequalities for M-matrices. Linear Multilinear Algebra. 1980;8:291–316.
[2] Bapat RB. Multinomial probabilities, permanents and a conjecture of Karlin and Rinott. Proc Am Math Soc. 1988;102(3):467–472.

[3] Bapat RB, Raghavan TES. Nonnegative matrices and applications. Encyclopedia of Mathematics and its Applications. Cambridge: Cambridge University Press; 1997.
[4] Bhatia R, Jain T. Mean matrices and conditional negativity. Electron J Linear Algebra. 2016;29:206–222.
[5] Carlson D, Haynsworth E, Markham T. A generalization of the Schur complement by means of the Moore-Penrose inverse. SIAM J Appl Math. 1974;26:169–175.
[6] Ferland JA. Matrix-theoretic criteria for the quasiconvexity of twice continuously differentiable functions. Linear Algebra Appl. 1981;38:51–63.
[7] Fiedler M. Elliptic matrices with zero diagonal. Linear Algebra Appl. 2011;197/198:337–347.
[8] Kurata H, Bapat RB. Moore-Penrose inverse of a hollow symmetric matrix and a predistance matrix. Spec Matrices. 2016;4:270–282.
[9] Micchelli CA. Interpolation of scattered data: distance matrices and conditionally positive definite functions. Constr Approx. 1986;2:11–22.
[10] Tarazaga P, Hayden TL, Wells J. Circum-Euclidean distance matrices and faces. Linear Algebra Appl. 1996;232:77–96.
[11] Zhang F. Matrix theory: basic results and techniques. New York: Springer; 1991.
