Journal of Inequalities in Pure and Applied Mathematics

STOLARSKY MEANS OF SEVERAL VARIABLES

Volume 6, Issue 2, Article 30, 2005.
EDWARD NEUMAN
Department of Mathematics, Southern Illinois University, Carbondale, IL 62901-4408, USA
EMail: [email protected]
URL: http://www.math.siu.edu/neuman/personal.html
Received 19 October, 2004; accepted 24 February, 2005.
Communicated by: Zs. Páles

ISSN (electronic): 1443-5756
© 2000 Victoria University

Abstract

A generalization of the Stolarsky means to the case of several variables is presented. The new means are derived from the logarithmic mean of several variables studied in [9]. Basic properties and inequalities involving the means under discussion are included. Limit theorems for these means with the underlying measure being the Dirichlet measure are established.

2000 Mathematics Subject Classification: 33C70, 26D20
Key words: Stolarsky means, Dresher means, Dirichlet averages, Totally positive functions, Inequalities

The author is indebted to a referee for several constructive comments on the first draft of this paper.

Contents
1  Introduction and Notation
2  Elementary Properties of $E_{r,s}(\mu;X)$
3  The Mean $E_{r,s}(b;X)$
A  Appendix: Total Positivity of $E_{r,s}^{r-s}(x,y)$
References

1. Introduction and Notation

In 1975 K.B. Stolarsky [16] introduced a two-parameter family of bivariate means, known in the mathematical literature as the Stolarsky means. Some authors call these means the extended means (see, e.g., [6, 7]) or the difference means (see [10]). For $r, s \in \mathbb{R}$ and two positive numbers $x$ and $y$ ($x \neq y$) they are defined as follows [16]:

 1  r r  r−s  s x − y  , rs(r − s) 6= 0; Stolarsky Means of Several  r xs − ys Variables   r r   1 x ln x − y ln y  Edward Neuman exp − + , r = s 6= 0;  r xr − yr (1.1) Er,s(x, y) = 1  r r  r Title Page  x − y  , r 6= 0, s = 0;  r(ln x − ln y) Contents  √  xy, r = s = 0. JJ II

The mean $E_{r,s}(x,y)$ is symmetric in its parameters $r$ and $s$ and in its variables $x$ and $y$ as well. Other properties of $E_{r,s}(x,y)$ include homogeneity of degree one in the variables $x$ and $y$ and monotonicity in $r$ and $s$. It is known that $E_{r,s}$ increases with an increase in either $r$ or $s$ (see [6]). It is worth mentioning that the Stolarsky mean admits the following integral representation ([16])

(1.2) $$\ln E_{r,s}(x,y) = \frac{1}{s-r}\int_r^s \ln I_t\,dt$$

($r \neq s$), where $I_t \equiv I_t(x,y) = E_{t,t}(x,y)$ is the identric mean of order $t$. J. Pečarić and V. Šimić [15] have pointed out that

(1.3) $$E_{r,s}(x,y) = \left(\int_0^1 \bigl(t x^s + (1-t) y^s\bigr)^{\frac{r-s}{s}} dt\right)^{\frac{1}{r-s}}$$

($s(r-s) \neq 0$). This representation shows that the Stolarsky means belong to a two-parameter family of means studied earlier by M.D. Tobey [18]. A comparison theorem for the Stolarsky means has been obtained by E.B. Leach and M.C. Sholander in [7] and independently by Zs. Páles in [13]. Other results for the means (1.1) include inequalities, limit theorems and more (see, e.g., [17, 4, 6, 10, 12]).

In the past several years researchers have made attempts to generalize the Stolarsky means to several variables (see [6, 18, 15, 8]). Further generalizations include the so-called functional Stolarsky means. For more details about the latter class of means the interested reader is referred to [14] and [11].

To facilitate presentation let us introduce more notation. In what follows, the symbol $E_{n-1}$ will stand for the Euclidean simplex, which is defined by
$$E_{n-1} = \bigl\{(u_1,\ldots,u_{n-1}) : u_i \geq 0,\ 1 \leq i \leq n-1,\ u_1+\cdots+u_{n-1} \leq 1\bigr\}.$$
Further, let $X = (x_1,\ldots,x_n)$ be an $n$-tuple of positive numbers and let $X_{\min} = \min(X)$, $X_{\max} = \max(X)$. The quantity

(1.4) $$L(X) = (n-1)!\int_{E_{n-1}} \prod_{i=1}^n x_i^{u_i}\,du = (n-1)!\int_{E_{n-1}} \exp(u\cdot Z)\,du$$

is a special case of the logarithmic mean of $X$ which has been introduced in [9]. Here $u = (u_1,\ldots,u_{n-1},\,1-u_1-\cdots-u_{n-1})$, where $(u_1,\ldots,u_{n-1}) \in E_{n-1}$, $du = du_1\ldots du_{n-1}$, $Z = \ln(X) = (\ln x_1,\ldots,\ln x_n)$, and $x\cdot y = x_1y_1+\cdots+x_ny_n$ is the dot product of the vectors $x$ and $y$. Recently J. Merikowski [8] has proposed the following generalization of the Stolarsky mean $E_{r,s}$ to several variables:

(1.5) $$E_{r,s}(X) = \left(\frac{L(X^r)}{L(X^s)}\right)^{\frac{1}{r-s}}$$

($r \neq s$), where $X^r = (x_1^r,\ldots,x_n^r)$. In the paper cited above, the author did not prove that $E_{r,s}(X)$ is a mean of $X$, i.e., that

(1.6) $$X_{\min} \leq E_{r,s}(X) \leq X_{\max}$$

holds true. If $n = 2$ and $rs(r-s) \neq 0$, or if $r \neq 0$ and $s = 0$, then (1.5) simplifies to (1.1) in the stated cases.

This paper deals with a two-parameter family of multivariate means whose prototype is given in (1.5). In order to define these means let us introduce more notation. By $\mu$ we will denote a probability measure on $E_{n-1}$. The logarithmic mean $L(\mu;X)$ with the underlying measure $\mu$ is defined in [9] as follows:

(1.7) $$L(\mu;X) = \int_{E_{n-1}} \prod_{i=1}^n x_i^{u_i}\,\mu(u)\,du = \int_{E_{n-1}} \exp(u\cdot Z)\,\mu(u)\,du.$$
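For $n = 2$ the simplex $E_1$ is the interval $[0,1]$ and, with the Lebesgue measure, (1.7) reduces to the classical logarithmic mean $(x_1-x_2)/(\ln x_1-\ln x_2)$. This can be confirmed by direct quadrature (a sketch under these assumptions; the function names and the choice of the composite Simpson rule are ours):

```python
import math

def L_lebesgue_2(x1, x2, m=2000):
    """L(mu; X) of (1.7) for n = 2 with the Lebesgue (uniform) measure on E_1 = [0, 1]:
    L = int_0^1 x1^u * x2^(1-u) du, computed by the composite Simpson rule (m even)."""
    f = lambda u: x1**u * x2**(1.0 - u)
    h = 1.0 / m
    total = f(0.0) + f(1.0)
    for k in range(1, m):
        total += (4.0 if k % 2 else 2.0) * f(k * h)
    return total * h / 3.0

# For n = 2 this should agree with the classical logarithmic mean.
log_mean = lambda x1, x2: (x1 - x2) / (math.log(x1) - math.log(x2))
```

The quadrature value also lies strictly between $\min(x_1,x_2)$ and $\max(x_1,x_2)$, as a mean should.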

We define

(1.8) $$E_{r,s}(\mu;X) = \begin{cases}\left(\dfrac{L(\mu;X^r)}{L(\mu;X^s)}\right)^{\frac{1}{r-s}}, & r \neq s,\\[1ex]\exp\left(\dfrac{d}{dr}\ln L(\mu;X^r)\right), & r = s.\end{cases}$$

Let us note that for $\mu(u) = (n-1)!$, the Lebesgue measure on $E_{n-1}$, the first part of (1.8) simplifies to (1.5).

In Section 2 we shall prove that $E_{r,s}(\mu;X)$ is a mean value of $X$, i.e., that it satisfies the inequalities (1.6). Some elementary properties of this mean are also derived. Section 3 deals with limit theorems for the new mean, with the probability measure being the Dirichlet measure. The latter is denoted by $\mu_b$, where $b = (b_1,\ldots,b_n) \in \mathbb{R}^n_+$, and is defined as [2]

(1.9) $$\mu_b(u) = \frac{1}{B(b)}\prod_{i=1}^n u_i^{b_i-1},$$

where $B(\cdot)$ is the multivariate beta function, $(u_1,\ldots,u_{n-1}) \in E_{n-1}$, and $u_n = 1-u_1-\cdots-u_{n-1}$. In the Appendix we shall prove that under certain conditions imposed on the parameters $r$ and $s$, the function $E_{r,s}^{r-s}(x,y)$ is strictly totally positive as a function of $x$ and $y$.
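Ahead of the formal proof in Section 2, the mean property (1.6) for the definition (1.8) can be explored numerically. The sketch below estimates $L(\mu;X^t)$ for the uniform measure on the simplex by Monte Carlo (the function name, sample size and the sorted-uniform sampler are our choices). Since the sample average itself corresponds to a discrete probability measure, the bounds and the parameter monotonicity proved in Section 2 hold exactly for the estimate:

```python
import math, random

def mc_E(r, s, X, n_samples=20000, seed=1):
    """Monte-Carlo sketch of E_{r,s}(mu; X) from (1.8), r != s, with mu the uniform
    (normalized Lebesgue) measure on E_{n-1}: L(mu; X^t) is estimated by averaging
    exp(t * (u . ln X)) over uniform barycentric samples u."""
    rng = random.Random(seed)
    Z = [math.log(x) for x in X]
    Lr = Ls = 0.0
    for _ in range(n_samples):
        # a uniform point of the simplex via sorted-uniform spacings
        cuts = sorted(rng.random() for _ in range(len(X) - 1))
        u = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        d = sum(ui * zi for ui, zi in zip(u, Z))
        Lr += math.exp(r * d)
        Ls += math.exp(s * d)
    return (Lr / Ls) ** (1.0 / (r - s))
```

For $X = (1, 2, 6)$ the estimate lies strictly between $1$ and $6$, in line with (1.6), and grows with $r$.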

2. Elementary Properties of $E_{r,s}(\mu;X)$

In order to prove that $E_{r,s}(\mu;X)$ is a mean value we need the following version of the Mean-Value Theorem for integrals.

Proposition 2.1. Let $\alpha := X_{\min} < X_{\max} =: \beta$ and let $f, g \in C([\alpha,\beta])$ with $g(t) \neq 0$ for all $t \in [\alpha,\beta]$. Then there exists $\xi \in (\alpha,\beta)$ such that

(2.1) $$\frac{\int_{E_{n-1}} f(u\cdot X)\,\mu(u)\,du}{\int_{E_{n-1}} g(u\cdot X)\,\mu(u)\,du} = \frac{f(\xi)}{g(\xi)}.$$

Proof. Let the numbers $\gamma$ and $\delta$ and the function $\phi$ be defined in the following way:
$$\gamma = \int_{E_{n-1}} g(u\cdot X)\,\mu(u)\,du, \qquad \delta = \int_{E_{n-1}} f(u\cdot X)\,\mu(u)\,du, \qquad \phi(t) = \gamma f(t) - \delta g(t).$$
Letting $t = u\cdot X$ and, next, integrating both sides against the measure $\mu$, we obtain
$$\int_{E_{n-1}} \phi(u\cdot X)\,\mu(u)\,du = 0.$$
On the other hand, application of the Mean-Value Theorem to the last integral gives
$$\phi(c\cdot X)\int_{E_{n-1}} \mu(u)\,du = 0,$$
where $c = (c_1,\ldots,c_{n-1},c_n)$ with $(c_1,\ldots,c_{n-1}) \in E_{n-1}$ and $c_n = 1-c_1-\cdots-c_{n-1}$. Letting $\xi = c\cdot X$ and taking into account that
$$\int_{E_{n-1}} \mu(u)\,du = 1,$$
we obtain $\phi(\xi) = 0$. This in conjunction with the definition of $\phi$ gives the desired result (2.1). The proof is complete.

The author is indebted to Professor Zsolt Páles for a useful suggestion regarding the proof of Proposition 2.1.

For later use let us introduce the symbol $E_{r,s}^{(p)}(\mu;X)$ ($p \neq 0$), where

(2.2) $$E_{r,s}^{(p)}(\mu;X) = \bigl[E_{r,s}(\mu;X^p)\bigr]^{\frac{1}{p}}.$$

We are in a position to prove the following.

Theorem 2.2. Let $X \in \mathbb{R}^n_+$ and let $r, s \in \mathbb{R}$. Then

(i) $X_{\min} \leq E_{r,s}(\mu;X) \leq X_{\max}$,

(ii) $E_{r,s}(\mu;\lambda X) = \lambda E_{r,s}(\mu;X)$, $\lambda > 0$ ($\lambda X := (\lambda x_1,\ldots,\lambda x_n)$),

(iii) $E_{r,s}(\mu;X)$ increases with an increase in either $r$ or $s$,

(iv) $\ln E_{r,s}(\mu;X) = \dfrac{1}{r-s}\displaystyle\int_s^r \ln E_{t,t}(\mu;X)\,dt$, $r \neq s$,

(v) $E_{r,s}^{(p)}(\mu;X) = E_{pr,ps}(\mu;X)$,

(vi) $E_{r,s}(\mu;X)\,E_{-r,-s}(\mu;X^{-1}) = 1$ ($X^{-1} := (1/x_1,\ldots,1/x_n)$),

(vii) $E_{r,s}^{s-r}(\mu;X) = E_{r,p}^{p-r}(\mu;X)\,E_{p,s}^{s-p}(\mu;X)$.

Proof of (i). Assume first that $r \neq s$. Making use of (1.8) and (1.7) we obtain

$$E_{r,s}(\mu;X) = \left[\frac{\int_{E_{n-1}} \exp\bigl(r(u\cdot Z)\bigr)\,\mu(u)\,du}{\int_{E_{n-1}} \exp\bigl(s(u\cdot Z)\bigr)\,\mu(u)\,du}\right]^{\frac{1}{r-s}}.$$

Application of (2.1) with $f(t) = \exp(rt)$ and $g(t) = \exp(st)$ gives

$$E_{r,s}(\mu;X) = \left[\frac{\exp\bigl(r(c\cdot Z)\bigr)}{\exp\bigl(s(c\cdot Z)\bigr)}\right]^{\frac{1}{r-s}} = \exp(c\cdot Z),$$
where $c = (c_1,\ldots,c_{n-1},c_n)$ with $(c_1,\ldots,c_{n-1}) \in E_{n-1}$ and $c_n = 1-c_1-\cdots-c_{n-1}$. Since $c\cdot Z = c_1\ln x_1+\cdots+c_n\ln x_n$, we have $\ln X_{\min} \leq c\cdot Z \leq \ln X_{\max}$. This in turn implies that $X_{\min} \leq \exp(c\cdot Z) \leq X_{\max}$, which completes the proof of (i) when $r \neq s$. Assume now that $r = s$. It follows from (1.8) and (1.7) that
$$\ln E_{r,r}(\mu;X) = \frac{\int_{E_{n-1}} (u\cdot Z)\exp\bigl(r(u\cdot Z)\bigr)\,\mu(u)\,du}{\int_{E_{n-1}} \exp\bigl(r(u\cdot Z)\bigr)\,\mu(u)\,du}.$$
Application of (2.1) to the right side with $f(t) = t\exp(rt)$ and $g(t) = \exp(rt)$ gives
$$\ln E_{r,r}(\mu;X) = \frac{(c\cdot Z)\exp\bigl(r(c\cdot Z)\bigr)}{\exp\bigl(r(c\cdot Z)\bigr)} = c\cdot Z.$$
Since $\ln X_{\min} \leq c\cdot Z \leq \ln X_{\max}$, the assertion follows. This completes the proof of (i).

Proof of (ii). The following result

(2.3) $$L\bigl(\mu;(\lambda X)^r\bigr) = \lambda^r L(\mu;X^r)$$

($\lambda > 0$) is established in [9, (2.6)]. Assume that $r \neq s$. Using (1.8) and (2.3) we obtain
$$E_{r,s}(\mu;\lambda X) = \left(\frac{\lambda^r L(\mu;X^r)}{\lambda^s L(\mu;X^s)}\right)^{\frac{1}{r-s}} = \lambda E_{r,s}(\mu;X).$$
Consider now the case when $r = s \neq 0$. Making use of (1.8) and (2.3) we obtain
$$E_{r,r}(\mu;\lambda X) = \exp\left(\frac{d}{dr}\ln L\bigl(\mu;(\lambda X)^r\bigr)\right) = \exp\left(\frac{d}{dr}\ln\bigl(\lambda^r L(\mu;X^r)\bigr)\right) = \exp\left(\frac{d}{dr}\bigl(r\ln\lambda + \ln L(\mu;X^r)\bigr)\right) = \lambda E_{r,r}(\mu;X).$$
When $r = s = 0$, an easy computation shows that

(2.4) $$E_{0,0}(\mu;X) = \prod_{i=1}^n x_i^{w_i} \equiv G(w;X),$$

where

(2.5) $$w_i = \int_{E_{n-1}} u_i\,\mu(u)\,du$$

($1 \leq i \leq n$) are called the natural weights or partial moments of the measure $\mu$, and $w = (w_1,\ldots,w_n)$. Since $w_1+\cdots+w_n = 1$, $E_{0,0}(\mu;\lambda X) = \lambda E_{0,0}(\mu;X)$. The proof of (ii) is complete.
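The natural weights (2.5) and the weighted geometric mean (2.4) can be illustrated for the uniform measure on $E_2$ (so $n = 3$), for which each $w_i = 1/3$. The following Monte-Carlo sketch (function names and sample sizes are our choices) estimates the weights and evaluates $G(w;X)$:

```python
import math, random

def natural_weights(n, n_samples=30000, seed=2):
    """Natural weights w_i = int u_i mu(u) du of (2.5), estimated by Monte Carlo
    for mu the uniform measure on E_{n-1}; for this mu, w_i = 1/n exactly."""
    rng = random.Random(seed)
    acc = [0.0] * n
    for _ in range(n_samples):
        cuts = sorted(rng.random() for _ in range(n - 1))
        u = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
        for i, ui in enumerate(u):
            acc[i] += ui
    return [a / n_samples for a in acc]

def G(w, X):
    """Weighted geometric mean G(w; X) of (2.4)."""
    return math.exp(sum(wi * math.log(xi) for wi, xi in zip(w, X)))
```

For example, with equal weights $G\bigl((1/3,1/3,1/3);(1,8,27)\bigr) = 216^{1/3} = 6$.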

Proof of (iii). In order to establish the asserted property, let us note that the function $r \to \exp(rt)$ is logarithmically convex (log-convex) in $r$. This, in conjunction with Theorem B.6 in [2], implies that the function $r \to L(\mu;X^r)$ is also log-convex in $r$. It follows from (1.8) that

$$\ln E_{r,s}(\mu;X) = \frac{\ln L(\mu;X^r) - \ln L(\mu;X^s)}{r-s}.$$
The right side is the divided difference of order one at $r$ and $s$. Convexity of $\ln L(\mu;X^r)$ in $r$ implies that the divided difference increases with an increase in either $r$ or $s$. This in turn implies that $\ln E_{r,s}(\mu;X)$ has the same property. Hence the monotonicity of the mean $E_{r,s}$ in its parameters follows. Now let $r = s$. Then (1.8) yields

$$\ln E_{r,r}(\mu;X) = \frac{d}{dr}\ln L(\mu;X^r).$$
Since $\ln L(\mu;X^r)$ is convex in $r$, its derivative with respect to $r$ increases with an increase in $r$. This completes the proof of (iii).

Proof of (iv). Let $r \neq s$. It follows from (1.8) that
$$\frac{1}{r-s}\int_s^r \ln E_{t,t}(\mu;X)\,dt = \frac{1}{r-s}\int_s^r \frac{d}{dt}\ln L(\mu;X^t)\,dt = \frac{1}{r-s}\bigl[\ln L(\mu;X^r) - \ln L(\mu;X^s)\bigr] = \ln E_{r,s}(\mu;X).$$
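Property (iv) specializes, for $n = 2$, to the integral representation (1.2) for the bivariate means (1.1), so it can be spot-checked by quadrature (a sketch; the helper names and the composite Simpson rule are our choices):

```python
import math

def ln_identric(t, x, y):
    """ln E_{t,t}(x, y) for t != 0, x != y, from the second case of (1.1)."""
    return -1.0 / t + (x**t * math.log(x) - y**t * math.log(y)) / (x**t - y**t)

def ln_stolarsky(r, s, x, y):
    """ln E_{r,s}(x, y) for rs(r - s) != 0, from the first case of (1.1)."""
    return math.log((s / r) * (x**r - y**r) / (x**s - y**s)) / (r - s)

def simpson(f, a, b, m=1000):
    """Composite Simpson quadrature of f over [a, b] (m even)."""
    h = (b - a) / m
    total = f(a) + f(b)
    for k in range(1, m):
        total += (4.0 if k % 2 else 2.0) * f(a + k * h)
    return total * h / 3.0
```

Here $\ln E_{3,1}(2,5)$ should agree with $\frac{1}{2}\int_1^3 \ln E_{t,t}(2,5)\,dt$ up to quadrature error.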

Proof of (v). Let $r \neq s$. Using (2.2) and (1.8) we obtain

$$E_{r,s}^{(p)}(\mu;X) = \bigl[E_{r,s}(\mu;X^p)\bigr]^{\frac{1}{p}} = \left(\frac{L(\mu;X^{pr})}{L(\mu;X^{ps})}\right)^{\frac{1}{p(r-s)}} = E_{pr,ps}(\mu;X).$$
Assume now that $r = s \neq 0$. Making use of (2.2), (1.8) and (1.7) we have
$$E_{r,r}^{(p)}(\mu;X) = \exp\left(\frac{1}{p}\,\frac{d}{dr}\ln L(\mu;X^{pr})\right) = \exp\left(\frac{1}{L(\mu;X^{pr})}\int_{E_{n-1}} (u\cdot Z)\exp\bigl(pr(u\cdot Z)\bigr)\,\mu(u)\,du\right) = E_{pr,pr}(\mu;X).$$
The case when $r = s = 0$ is trivial because $E_{0,0}(\mu;X)$ is the weighted geometric mean of $X$.

Proof of (vi). Here we use (v) with $p = -1$ to obtain $\bigl[E_{r,s}(\mu;X^{-1})\bigr]^{-1} = E_{-r,-s}(\mu;X)$. Letting $X := X^{-1}$ we obtain the desired result.

Proof of (vii). There is nothing to prove when either $p = r$ or $p = s$ or $r = s$. In the remaining cases we use (1.8) to obtain the asserted result. This completes the proof.

In the next theorem we give some inequalities involving the means under discussion.

Theorem 2.3. Let $r, s \in \mathbb{R}$. Then the inequalities

(2.6) $$E_{r,r}(\mu;X) \leq E_{r,s}(\mu;X) \leq E_{s,s}(\mu;X)$$

are valid provided $r \leq s$. If $s > 0$, then

(2.7) $$E_{r-s,0}(\mu;X) \leq E_{r,s}(\mu;X).$$

Inequality (2.7) is reversed if $s < 0$, and it becomes an equality if $s = 0$. Assume that $r, s > 0$ and let $p \leq q$. Then

(2.8) $$E_{r,s}^{(p)}(\mu;X) \leq E_{r,s}^{(q)}(\mu;X),$$

with the inequality reversed if $r, s < 0$.

Proof. Inequalities (2.6) and (2.7) follow immediately from Part (iii) of Theorem 2.2. For the proof of (2.8), let $r, s > 0$ and let $p \leq q$. Then $pr \leq qr$ and $ps \leq qs$. Applying Parts (v) and (iii) of Theorem 2.2, we obtain
$$E_{r,s}^{(p)}(\mu;X) = E_{pr,ps}(\mu;X) \leq E_{qr,qs}(\mu;X) = E_{r,s}^{(q)}(\mu;X).$$
When $r, s < 0$, the proof of (2.8) goes along the lines introduced above, hence it is omitted. The proof is complete.

3. The Mean $E_{r,s}(b;X)$

An important probability measure on $E_{n-1}$ is the Dirichlet measure $\mu_b(u)$, $b \in \mathbb{R}^n_+$ (see (1.9)). Its role in the theory of special functions is well documented in Carlson's monograph [2]. When $\mu = \mu_b$, the mean under discussion will be denoted by $E_{r,s}(b;X)$. The natural weights $w_i$ (see (2.5)) of $\mu_b$ are given explicitly by

(3.1) $$w_i = b_i/c$$

($1 \leq i \leq n$), where $c = b_1+\cdots+b_n$ (see [2, (5.6-2)]). For later use we define $w = (w_1,\ldots,w_n)$. Recall that the weighted Dresher mean $D_{r,s}(p;X)$ of order $(r,s) \in \mathbb{R}^2$ of $X \in \mathbb{R}^n_+$ with weights $p = (p_1,\ldots,p_n) \in \mathbb{R}^n_+$ is defined as

(3.2) $$D_{r,s}(p;X) = \begin{cases}\left(\dfrac{\sum_{i=1}^n p_i x_i^r}{\sum_{i=1}^n p_i x_i^s}\right)^{\frac{1}{r-s}}, & r \neq s,\\[1ex]\exp\left(\dfrac{\sum_{i=1}^n p_i x_i^r \ln x_i}{\sum_{i=1}^n p_i x_i^r}\right), & r = s\end{cases}$$

(see, e.g., [1, Sec. 24]).

In this section we present two limit theorems for the mean $E_{r,s}$ with the underlying measure being the Dirichlet measure. In order to facilitate the presentation we need the concept of the Dirichlet average of a function. Following [2, Def. 5.2-1], let $\Omega$ be a convex set in $\mathbb{C}$ and let $Y = (y_1,\ldots,y_n) \in \Omega^n$, $n \geq 2$. Further, let $f$ be a measurable function on $\Omega$. Define

(3.3) $$F(b;Y) = \int_{E_{n-1}} f(u\cdot Y)\,\mu_b(u)\,du.$$

Then $F$ is called the Dirichlet average of $f$ with variables $Y = (y_1,\ldots,y_n)$ and parameters $b = (b_1,\ldots,b_n)$. We need the following result [2, Ex. 6.3-4]. Let $\Omega$ be an open circular disk in $\mathbb{C}$, let $f$ be holomorphic on $\Omega$, let $Y \in \Omega^n$, $c \in \mathbb{C}$, $c \neq 0, -1, \ldots$, and let $w_1+\cdots+w_n = 1$. Then

(3.4) $$\lim_{c\to 0} F(cw;Y) = \sum_{i=1}^n w_i f(y_i),$$

where $cw = (cw_1,\ldots,cw_n)$. We are in a position to prove the following.

Theorem 3.1. Let $w_1 > 0, \ldots, w_n > 0$ with $w_1+\cdots+w_n = 1$. If $r, s \in \mathbb{R}$ and $X \in \mathbb{R}^n_+$, then
$$\lim_{c\to 0^+} E_{r,s}(cw;X) = D_{r,s}(w;X).$$

Proof. We use (1.7) and (3.3) to obtain $L(cw;X) = F(cw;Z)$, where $Z = \ln X = (\ln x_1,\ldots,\ln x_n)$. Making use of (3.4) with $f(t) = \exp(t)$ and $Y = \ln X$ we obtain
$$\lim_{c\to 0^+} L(cw;X) = \sum_{i=1}^n w_i x_i.$$
Hence

(3.5) $$\lim_{c\to 0^+} L(cw;X^r) = \sum_{i=1}^n w_i x_i^r.$$

Assume that $r \neq s$. Application of (3.5) to (1.8) gives

$$\lim_{c\to 0^+} E_{r,s}(cw;X) = \lim_{c\to 0^+}\left(\frac{L(cw;X^r)}{L(cw;X^s)}\right)^{\frac{1}{r-s}} = \left(\frac{\sum_{i=1}^n w_i x_i^r}{\sum_{i=1}^n w_i x_i^s}\right)^{\frac{1}{r-s}} = D_{r,s}(w;X).$$
Let $r = s$. Application of (3.4) with $f(t) = t\exp(rt)$ gives

$$\lim_{c\to 0^+} F(cw;Z) = \sum_{i=1}^n w_i z_i \exp(r z_i) = \sum_{i=1}^n w_i (\ln x_i)\,x_i^r.$$
This in conjunction with (3.5) and (1.8) gives
$$\lim_{c\to 0^+} E_{r,r}(cw;X) = \lim_{c\to 0^+} \exp\left(\frac{F(cw;Z)}{L(cw;X^r)}\right) = \exp\left(\frac{\sum_{i=1}^n w_i x_i^r \ln x_i}{\sum_{i=1}^n w_i x_i^r}\right) = D_{r,r}(w;X).$$
This completes the proof.

Theorem 3.2. Under the assumptions of Theorem 3.1 one has

(3.6) $$\lim_{c\to\infty} E_{r,s}(cw;X) = G(w;X).$$

Proof. The following limit (see [9, (4.10)])

(3.7) $$\lim_{c\to\infty} L(cw;X) = G(w;X)$$

will be used in the sequel. We shall first establish (3.6) when $r \neq s$. It follows from (1.8) and (3.7) that

$$\lim_{c\to\infty} E_{r,s}(cw;X) = \lim_{c\to\infty}\left(\frac{L(cw;X^r)}{L(cw;X^s)}\right)^{\frac{1}{r-s}} = \bigl(G(w;X)^{r-s}\bigr)^{\frac{1}{r-s}} = G(w;X).$$
Assume that $r = s$. We shall first prove that

(3.8) $$\lim_{c\to\infty} F(cw;Z) = \bigl(\ln G(w;X)\bigr)\,G(w;X)^r,$$

where $F$ is the Dirichlet average of $f(t) = t\exp(rt)$. Averaging both sides of

$$f(t) = \sum_{m=0}^\infty \frac{r^m}{m!}\,t^{m+1}$$
we obtain

(3.9) $$F(cw;Z) = \sum_{m=0}^\infty \frac{r^m}{m!}\,R_{m+1}(cw;Z),$$

where $R_{m+1}$ stands for the Dirichlet average of the power function $t^{m+1}$. We will show that the series in (3.9) converges uniformly in $0 < c < \infty$. This in turn implies that, as $c \to \infty$, we can proceed to the limit term by term. Making use of [2, (6.2-24)] we obtain

$$|R_{m+1}(cw;Z)| \leq |Z|^{m+1}, \qquad m \in \mathbb{N},$$
where $|Z| = \max\{|\ln x_i| : 1 \leq i \leq n\}$. By the Weierstrass M-test the series in (3.9) converges uniformly in the stated domain. Taking limits on both sides

of (3.9) we obtain with the aid of (3.4)

$$\lim_{c\to\infty} F(cw;Z) = \sum_{m=0}^\infty \frac{r^m}{m!}\lim_{c\to\infty} R_{m+1}(cw;Z) = \sum_{m=0}^\infty \frac{r^m}{m!}\left(\sum_{i=1}^n w_i z_i\right)^{m+1} = \bigl(\ln G(w;X)\bigr)\sum_{m=0}^\infty \frac{1}{m!}\bigl(r\ln G(w;X)\bigr)^m = \bigl(\ln G(w;X)\bigr)\,G(w;X)^r.$$

This completes the proof of (3.8). To complete the proof of (3.6) we use (1.8), (3.7), and (3.8) to obtain
$$\lim_{c\to\infty} \ln E_{r,r}(cw;X) = \lim_{c\to\infty} \frac{F(cw;Z)}{L(cw;X^r)} = \frac{\bigl(\ln G(w;X)\bigr)\,G(w;X)^r}{G(w;X)^r} = \ln G(w;X).$$
Hence the assertion follows.
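Both limit theorems can be probed numerically for $n = 2$: sampling from the Dirichlet measure $\mu_{cw}$ with small $c$ should push $E_{r,s}(cw;X)$ toward the Dresher mean (3.2), while large $c$ should push it toward $G(w;X)$. The Monte-Carlo sketch below (function names, sampler and sample sizes are our choices) draws Dirichlet variates as normalized gamma variates:

```python
import math, random

def E_dirichlet(r, s, c, w, X, n_samples=20000, seed=3):
    """Monte-Carlo sketch of E_{r,s}(cw; X), r != s: L(cw; X^t) is estimated by
    averaging exp(t * (u . ln X)) over Dirichlet(c*w) samples."""
    rng = random.Random(seed)
    Z = [math.log(x) for x in X]
    Lr = Ls = 0.0
    for _ in range(n_samples):
        g = [rng.gammavariate(c * wi, 1.0) for wi in w]
        tot = sum(g)
        if tot == 0.0:                  # guard against underflow for very small c*w_i
            continue
        d = sum(gi / tot * zi for gi, zi in zip(g, Z))
        Lr += math.exp(r * d)
        Ls += math.exp(s * d)
    return (Lr / Ls) ** (1.0 / (r - s))

def dresher(r, s, w, X):
    """Weighted Dresher mean D_{r,s}(w; X) of (3.2), r != s."""
    num = sum(wi * xi**r for wi, xi in zip(w, X))
    den = sum(wi * xi**s for wi, xi in zip(w, X))
    return (num / den) ** (1.0 / (r - s))
```

With $w = (1/2, 1/2)$, $X = (1, 9)$, $r = 2$, $s = 1$, the two target values are $D_{2,1}(w;X) = 8.2$ and $G(w;X) = 3$, and the estimates for small and large $c$ should approach them respectively.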

A. Appendix: Total Positivity of $E_{r,s}^{r-s}(x,y)$

A real-valued function $h(x,y)$ of two real variables is said to be strictly totally positive on its domain if every $n\times n$ determinant with elements $h(x_i,y_j)$, where $x_1 < x_2 < \cdots < x_n$ and $y_1 < y_2 < \cdots < y_n$, is strictly positive for every $n = 1, 2, \ldots$ (see [5]).

The goal of this section is to prove that the function $E_{r,s}^{r-s}(x,y)$ is strictly totally positive as a function of $x$ and $y$ provided the parameters $r$ and $s$ satisfy a certain condition. For later use we recall the definition of the $R$-hypergeometric function $R_{-\alpha}(\beta,\beta';x,y)$ of two variables $x, y > 0$ with parameters $\beta, \beta' > 0$:

(A1) $$R_{-\alpha}(\beta,\beta';x,y) = \frac{\Gamma(\beta+\beta')}{\Gamma(\beta)\Gamma(\beta')}\int_0^1 u^{\beta-1}(1-u)^{\beta'-1}\bigl(ux+(1-u)y\bigr)^{-\alpha}\,du$$

(see [2, (5.9-1)]).

Proposition A.1. Let $x, y > 0$ and let $r, s \in \mathbb{R}$. If $|r| < |s|$, then $E_{r,s}^{r-s}(x,y)$ is strictly totally positive on $\mathbb{R}^2_+$.

Proof. Using (1.3) and (A1) we have

(A2) $$E_{r,s}^{r-s}(x,y) = R_{\frac{r-s}{s}}(1,1;x^s,y^s)$$

($s(r-s) \neq 0$). B. Carlson and J. Gustafson [3] have proven that $R_{-\alpha}(\beta,\beta';x,y)$ is strictly totally positive in $x$ and $y$ provided $\beta, \beta' > 0$ and $0 < \alpha < \beta+\beta'$. Letting $\alpha = 1 - r/s$, $\beta = \beta' = 1$, $x := x^s$, $y := y^s$, and next, using (A2), we obtain the desired result.

Corollary A.2. Let $0 < x_1 < x_2$, $0 < y_1 < y_2$ and let the real numbers $r$ and $s$ satisfy the inequality $|r| < |s|$. If $s > 0$, then

(A3) $$E_{r,s}(x_1,y_1)\,E_{r,s}(x_2,y_2) < E_{r,s}(x_1,y_2)\,E_{r,s}(x_2,y_1).$$

Inequality (A3) is reversed if s < 0.

Proof. Let $a_{ij} = E_{r,s}^{r-s}(x_i,y_j)$ ($i, j = 1, 2$). It follows from Proposition A.1 that $\det[a_{ij}] > 0$ provided $|r| < |s|$. This in turn implies
$$\bigl(E_{r,s}(x_1,y_1)\,E_{r,s}(x_2,y_2)\bigr)^{r-s} > \bigl(E_{r,s}(x_1,y_2)\,E_{r,s}(x_2,y_1)\bigr)^{r-s}.$$
Assume that $s > 0$. Then the inequality $|r| < s$ implies $r - s < 0$. Hence (A3) follows when $s > 0$. The case when $s < 0$ is treated in a similar way.
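Corollary A.2 can be spot-checked numerically (a sketch; helper names are ours). Note that for the parameters chosen below, $E_{1,2}$ is simply the arithmetic mean and $E_{-1,-2}$ the harmonic mean, which makes the two directions of (A3) easy to see:

```python
import math

def stolarsky_rs(r, s, x, y):
    """E_{r,s}(x, y) for rs(r - s) != 0 and x != y (first case of (1.1))."""
    return ((s / r) * (x**r - y**r) / (x**s - y**s)) ** (1.0 / (r - s))

def check_A3(r, s, x1, x2, y1, y2):
    """Return the two sides of (A3) for 0 < x1 < x2, 0 < y1 < y2."""
    lhs = stolarsky_rs(r, s, x1, y1) * stolarsky_rs(r, s, x2, y2)
    rhs = stolarsky_rs(r, s, x1, y2) * stolarsky_rs(r, s, x2, y1)
    return lhs, rhs
```

With $(r,s) = (1,2)$ (so $|r| < |s|$, $s > 0$) the strict inequality (A3) holds, and with $(r,s) = (-1,-2)$ (so $s < 0$) it is reversed, as the corollary predicts.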

References

[1] E.F. BECKENBACH AND R. BELLMAN, Inequalities, Springer-Verlag, Berlin, 1961.

[2] B.C. CARLSON, Special Functions of Applied Mathematics, Academic Press, New York, 1977.

[3] B.C. CARLSON AND J.L. GUSTAFSON, Total positivity of mean values and hypergeometric functions, SIAM J. Math. Anal., 14(2) (1983), 389–395.

[4] P. CZINDER AND ZS. PÁLES, An extension of the Hermite-Hadamard inequality and an application for Gini and Stolarsky means, J. Ineq. Pure Appl. Math., 5(2) (2004), Art. 42. [ONLINE: http://jipam.vu.edu.au/article.php?sid=399].

[5] S. KARLIN, Total Positivity, Stanford Univ. Press, Stanford, CA, 1968.

[6] E.B. LEACH AND M.C. SHOLANDER, Extended mean values, Amer. Math. Monthly, 85(2) (1978), 84–90.

[7] E.B. LEACH AND M.C. SHOLANDER, Multi-variable extended mean values, J. Math. Anal. Appl., 104 (1984), 390–407.

[8] J.K. MERIKOWSKI, Extending means of two variables to several variables, J. Ineq. Pure Appl. Math., 5(3) (2004), Art. 65. [ONLINE: http://jipam.vu.edu.au/article.php?sid=411].

[9] E. NEUMAN, The weighted logarithmic mean, J. Math. Anal. Appl., 188 (1994), 885–900.

[10] E. NEUMAN AND ZS. PÁLES, On comparison of Stolarsky and Gini means, J. Math. Anal. Appl., 278 (2003), 274–284.

[11] E. NEUMAN, C.E.M. PEARCE, J. PEČARIĆ AND V. ŠIMIĆ, The generalized Hadamard inequality, g-convexity and functional Stolarsky means, Bull. Austral. Math. Soc., 68 (2003), 303–316.

[12] E. NEUMAN AND J. SÁNDOR, Inequalities involving Stolarsky and Gini means, Math. Pannonica, 14(1) (2003), 29–44.

[13] ZS. PÁLES, Inequalities for differences of powers, J. Math. Anal. Appl., 131 (1988), 271–281.

[14] C.E.M. PEARCE, J. PEČARIĆ AND V. ŠIMIĆ, Functional Stolarsky means, Math. Inequal. Appl., 2 (1999), 479–489.

[15] J. PEČARIĆ AND V. ŠIMIĆ, The Stolarsky-Tobey mean in n variables, Math. Inequal. Appl., 2 (1999), 325–341.

[16] K.B. STOLARSKY, Generalizations of the logarithmic mean, Math. Mag., 48(2) (1975), 87–92.

[17] K.B. STOLARSKY, The power and generalized logarithmic means, Amer. Math. Monthly, 87(7) (1980), 545–548.

[18] M.D. TOBEY, A two-parameter homogeneous mean value, Proc. Amer. Math. Soc., 18 (1967), 9–14.
