Journal of Inequalities in Pure and Applied Mathematics http://jipam.vu.edu.au/

Volume 6, Issue 2, Article 30, 2005

STOLARSKY MEANS OF SEVERAL VARIABLES

EDWARD NEUMAN
DEPARTMENT OF MATHEMATICS, SOUTHERN ILLINOIS UNIVERSITY, CARBONDALE, IL 62901-4408, USA
[email protected]
URL: http://www.math.siu.edu/neuman/personal.html

Received 19 October, 2004; accepted 24 February, 2005 Communicated by Zs. Páles

ABSTRACT. A generalization of the Stolarsky means to the case of several variables is presented. The new means are derived from the logarithmic mean of several variables studied in [9]. Basic properties and inequalities involving the means under discussion are included. Limit theorems for these means, with the underlying measure being the Dirichlet measure, are established.

Key words and phrases: Stolarsky means, Dresher means, Dirichlet averages, Totally positive functions, Inequalities.

2000 Mathematics Subject Classification. 33C70, 26D20.

1. INTRODUCTION AND NOTATION

In 1975 K.B. Stolarsky [16] introduced a two-parameter family of bivariate means, known in the mathematical literature as the Stolarsky means. Some authors call these means the extended means (see, e.g., [6, 7]) or the difference means (see [10]). For r, s ∈ \mathbb{R} and two positive numbers x and y (x ≠ y) they are defined as follows [16]:

(1.1)    E_{r,s}(x, y) =
\begin{cases}
\left( \dfrac{s}{r} \cdot \dfrac{x^r - y^r}{x^s - y^s} \right)^{1/(r-s)}, & rs(r - s) \neq 0; \\
\exp\left( -\dfrac{1}{r} + \dfrac{x^r \ln x - y^r \ln y}{x^r - y^r} \right), & r = s \neq 0; \\
\left( \dfrac{x^r - y^r}{r(\ln x - \ln y)} \right)^{1/r}, & r \neq 0,\ s = 0; \\
\sqrt{xy}, & r = s = 0.
\end{cases}
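For readers who want to experiment, definition (1.1) transcribes directly into code. The sketch below is illustrative only (the function name `stolarsky` is ours, not from the paper); it assumes x, y > 0.

```python
import math

def stolarsky(r, s, x, y):
    """Stolarsky mean E_{r,s}(x, y) of two positive numbers, per (1.1)."""
    if x == y:
        return float(x)
    if r == s == 0:                       # geometric mean
        return math.sqrt(x * y)
    if r == s:                            # identric-type case, r = s != 0
        return math.exp(-1.0 / r + (x**r * math.log(x) - y**r * math.log(y))
                        / (x**r - y**r))
    if s == 0:                            # r != 0, s = 0
        return ((x**r - y**r) / (r * math.log(x / y))) ** (1.0 / r)
    if r == 0:                            # use symmetry in the parameters
        return stolarsky(s, r, x, y)
    return (s * (x**r - y**r) / (r * (x**s - y**s))) ** (1.0 / (r - s))
```

Familiar special cases fall out of the parameters: E_{2,1} is the arithmetic mean, E_{1,0} the logarithmic mean, and E_{0,0} the geometric mean.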

The mean E_{r,s}(x, y) is symmetric in its parameters r and s, and in its variables x and y as well. Other properties of E_{r,s}(x, y) include homogeneity of degree one in the variables x and y and

ISSN (electronic): 1443-5756. © 2005 Victoria University. All rights reserved.
The author is indebted to a referee for several constructive comments on the first draft of this paper.

monotonicity in r and s. It is known that E_{r,s} increases with an increase in either r or s (see [6]). It is worth mentioning that the Stolarsky mean admits the following integral representation ([16]):

(1.2)    \ln E_{r,s}(x, y) = \frac{1}{s - r} \int_r^s \ln I_t \, dt

(r ≠ s), where I_t ≡ I_t(x, y) = E_{t,t}(x, y) is the identric mean of order t. J. Pečarić and V. Šimić [15] have pointed out that

(1.3)    E_{r,s}(x, y) = \left( \int_0^1 \bigl( t x^s + (1 - t) y^s \bigr)^{(r-s)/s} \, dt \right)^{1/(r-s)}

(s(r − s) ≠ 0). This representation shows that the Stolarsky means belong to a two-parameter family of means studied earlier by M.D. Tobey [18]. A comparison theorem for the Stolarsky means has been obtained by E.B. Leach and M.C. Sholander in [7] and independently by Zs. Páles in [13]. Other results for the means (1.1) include inequalities, limit theorems and more (see, e.g., [17, 4, 6, 10, 12]). In the past several years researchers have attempted to generalize the Stolarsky means to several variables (see [6, 18, 15, 8]). Further generalizations include the so-called functional Stolarsky means; for more details about the latter class of means the interested reader is referred to [14] and [11].

To facilitate presentation let us introduce more notation. In what follows, the symbol E_{n-1} will stand for the Euclidean simplex, defined by

E_{n-1} = \bigl\{ (u_1, \ldots, u_{n-1}) : u_i \geq 0,\ 1 \leq i \leq n-1,\ u_1 + \cdots + u_{n-1} \leq 1 \bigr\}.
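Representation (1.3) can be checked numerically against the closed form (1.1). The sketch below (our own helper names, midpoint-rule quadrature) compares the two for sample parameters:

```python
def stolarsky_closed(r, s, x, y):
    # closed form (1.1), case rs(r - s) != 0
    return (s * (x**r - y**r) / (r * (x**s - y**s))) ** (1.0 / (r - s))

def stolarsky_tobey(r, s, x, y, n=20000):
    # midpoint-rule evaluation of the integral representation (1.3)
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += (t * x**s + (1.0 - t) * y**s) ** ((r - s) / s)
    return (total * h) ** (1.0 / (r - s))
```

With n = 20000 subintervals the midpoint rule agrees with the closed form to well below 1e-6 for moderate parameters.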

Further, let X = (x_1, \ldots, x_n) be an n-tuple of positive numbers and let X_min = min(X), X_max = max(X). The quantity

(1.4)    L(X) = (n-1)! \int_{E_{n-1}} \prod_{i=1}^n x_i^{u_i} \, du = (n-1)! \int_{E_{n-1}} \exp(u \cdot Z) \, du

is a special case of the logarithmic mean of X, which has been introduced in [9]. Here u = (u_1, \ldots, u_{n-1}, 1 - u_1 - \cdots - u_{n-1}) with (u_1, \ldots, u_{n-1}) \in E_{n-1}, du = du_1 \ldots du_{n-1}, Z = \ln(X) = (\ln x_1, \ldots, \ln x_n), and x \cdot y = x_1 y_1 + \cdots + x_n y_n is the dot product of two vectors x and y. Recently J. Merikowski [8] has proposed the following generalization of the Stolarsky mean E_{r,s} to several variables:

(1.5)    E_{r,s}(X) = \left( \frac{L(X^r)}{L(X^s)} \right)^{1/(r-s)}

(r ≠ s), where X^r = (x_1^r, \ldots, x_n^r). In the paper cited above, the author did not prove that E_{r,s}(X) is a mean of X, i.e., that

(1.6)    X_min ≤ E_{r,s}(X) ≤ X_max

holds true. If n = 2 and rs(r − s) ≠ 0, or if r ≠ 0 and s = 0, then (1.5) simplifies to (1.1) in the stated cases. This paper deals with a two-parameter family of multivariate means whose prototype is given in (1.5). In order to define these means let us introduce more notation. By µ we will denote a probability measure on E_{n-1}. The logarithmic mean L(µ; X) with the underlying measure µ is


defined in [9] as follows:

(1.7)    L(\mu; X) = \int_{E_{n-1}} \prod_{i=1}^n x_i^{u_i} \, \mu(u) \, du = \int_{E_{n-1}} \exp(u \cdot Z) \mu(u) \, du.

We define

(1.8)    E_{r,s}(\mu; X) =
\begin{cases}
\left( \dfrac{L(\mu; X^r)}{L(\mu; X^s)} \right)^{1/(r-s)}, & r \neq s, \\
\exp\left( \dfrac{d}{dr} \ln L(\mu; X^r) \right), & r = s.
\end{cases}

Let us note that for µ(u) = (n − 1)!, the Lebesgue measure on E_{n-1}, the first part of (1.8) simplifies to (1.5). In Section 2 we shall prove that E_{r,s}(µ; X) is a mean value of X, i.e., that it satisfies the inequalities (1.6). Some elementary properties of this mean are also derived. Section 3 deals with limit theorems for the new mean, with the probability measure being the Dirichlet measure. The latter is denoted by µ_b, where b = (b_1, \ldots, b_n) \in \mathbb{R}^n_+, and is defined as [2]

(1.9)    \mu_b(u) = \frac{1}{B(b)} \prod_{i=1}^n u_i^{b_i - 1},

where B(·) is the multivariate beta function, (u_1, \ldots, u_{n-1}) \in E_{n-1}, and u_n = 1 − u_1 − \cdots − u_{n-1}. In the Appendix we shall prove that under certain conditions imposed on the parameters r and s, the function E^{r-s}_{r,s}(x, y) is strictly totally positive as a function of x and y.
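The mean (1.5) can be probed numerically. The sketch below (our own function names, not from the paper) estimates L(X) of (1.4) by Monte Carlo: a uniform point of the simplex is obtained by normalizing unit exponential variates, and L(X) is the average of exp(u · Z) over such points.

```python
import math
import random

def L_mc(X, trials=80_000, seed=1):
    # Monte Carlo estimate of the logarithmic mean L(X) in (1.4):
    # average of exp(u . Z) over u drawn uniformly from the simplex,
    # where a uniform point comes from normalized unit exponentials.
    rng = random.Random(seed)
    Z = [math.log(x) for x in X]
    total = 0.0
    for _ in range(trials):
        g = [-math.log(1.0 - rng.random()) for _ in Z]   # unit exponentials
        s = sum(g)
        total += math.exp(sum(gi / s * zi for gi, zi in zip(g, Z)))
    return total / trials

def E_mc(r, s, X, **kw):
    # the multivariate mean (1.5), r != s, with L estimated by Monte Carlo
    return (L_mc([x**r for x in X], **kw)
            / L_mc([x**s for x in X], **kw)) ** (1.0 / (r - s))
```

Because a fixed seed reuses the same simplex points, homogeneity (property (ii) of Theorem 2.2 below) holds essentially exactly in the sample, while the mean property (1.6) and the n = 2 reduction to (1.1) hold up to Monte Carlo error.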

2. ELEMENTARY PROPERTIES OF E_{r,s}(µ; X)

In order to prove that E_{r,s}(µ; X) is a mean value we need the following version of the Mean-Value Theorem for integrals.

Proposition 2.1. Let α := X_min < X_max =: β and let f, g ∈ C([α, β]) with g(t) ≠ 0 for all t ∈ [α, β]. Then there exists ξ ∈ (α, β) such that

(2.1)    \frac{\int_{E_{n-1}} f(u \cdot X) \mu(u) \, du}{\int_{E_{n-1}} g(u \cdot X) \mu(u) \, du} = \frac{f(\xi)}{g(\xi)}.

Proof. Let the numbers γ and δ and the function φ be defined in the following way:

\gamma = \int_{E_{n-1}} g(u \cdot X) \mu(u) \, du, \qquad \delta = \int_{E_{n-1}} f(u \cdot X) \mu(u) \, du, \qquad \phi(t) = \gamma f(t) - \delta g(t).

Letting t = u · X and, next, integrating both sides against the measure µ, we obtain

\int_{E_{n-1}} \phi(u \cdot X) \mu(u) \, du = 0.

On the other hand, application of the Mean-Value Theorem to the last integral gives

\phi(c \cdot X) \int_{E_{n-1}} \mu(u) \, du = 0,


where c = (c_1, \ldots, c_{n-1}, c_n) with (c_1, \ldots, c_{n-1}) \in E_{n-1} and c_n = 1 − c_1 − \cdots − c_{n-1}. Letting ξ = c · X and taking into account that

\int_{E_{n-1}} \mu(u) \, du = 1,

we obtain φ(ξ) = 0. This in conjunction with the definition of φ gives the desired result (2.1). The proof is complete. □

The author is indebted to Professor Zsolt Páles for a useful suggestion regarding the proof of Proposition 2.1.

For later use let us introduce the symbol E^{(p)}_{r,s}(µ; X) (p ≠ 0), where

(2.2)    E^{(p)}_{r,s}(\mu; X) = \bigl[ E_{r,s}(\mu; X^p) \bigr]^{1/p}.

We are in a position to prove the following.

Theorem 2.2. Let X ∈ \mathbb{R}^n_+ and let r, s ∈ \mathbb{R}. Then:

(i) X_min ≤ E_{r,s}(µ; X) ≤ X_max;
(ii) E_{r,s}(µ; λX) = λ E_{r,s}(µ; X), λ > 0 (λX := (λx_1, \ldots, λx_n));
(iii) E_{r,s}(µ; X) increases with an increase in either r or s;
(iv) \ln E_{r,s}(\mu; X) = \dfrac{1}{r - s} \displaystyle\int_s^r \ln E_{t,t}(\mu; X) \, dt, r ≠ s;
(v) E^{(p)}_{r,s}(\mu; X) = E_{pr,ps}(\mu; X);
(vi) E_{r,s}(\mu; X) \, E_{-r,-s}(\mu; X^{-1}) = 1 (X^{-1} := (1/x_1, \ldots, 1/x_n));
(vii) E^{s-r}_{r,s}(\mu; X) = E^{p-r}_{r,p}(\mu; X) \, E^{s-p}_{p,s}(\mu; X).

Proof of (i). Assume first that r ≠ s. Making use of (1.8) and (1.7) we obtain

E_{r,s}(\mu; X) = \left[ \frac{\int_{E_{n-1}} \exp\bigl( r(u \cdot Z) \bigr) \mu(u) \, du}{\int_{E_{n-1}} \exp\bigl( s(u \cdot Z) \bigr) \mu(u) \, du} \right]^{1/(r-s)}.

Application of (2.1) with f(t) = exp(rt) and g(t) = exp(st) gives

E_{r,s}(\mu; X) = \left[ \frac{\exp\bigl( r(c \cdot Z) \bigr)}{\exp\bigl( s(c \cdot Z) \bigr)} \right]^{1/(r-s)} = \exp(c \cdot Z),

where c = (c_1, \ldots, c_{n-1}, c_n) with (c_1, \ldots, c_{n-1}) \in E_{n-1} and c_n = 1 − c_1 − \cdots − c_{n-1}. Since c · Z = c_1 \ln x_1 + \cdots + c_n \ln x_n, we have \ln X_min ≤ c · Z ≤ \ln X_max. This in turn implies that X_min ≤ exp(c · Z) ≤ X_max. This completes the proof of (i) when r ≠ s.

Assume now that r = s. It follows from (1.8) and (1.7) that

\ln E_{r,r}(\mu; X) = \frac{\int_{E_{n-1}} (u \cdot Z) \exp\bigl( r(u \cdot Z) \bigr) \mu(u) \, du}{\int_{E_{n-1}} \exp\bigl( r(u \cdot Z) \bigr) \mu(u) \, du}.

Application of (2.1) to the right side with f(t) = t exp(rt) and g(t) = exp(rt) gives

\ln E_{r,r}(\mu; X) = \frac{(c \cdot Z) \exp\bigl( r(c \cdot Z) \bigr)}{\exp\bigl( r(c \cdot Z) \bigr)} = c \cdot Z.

Since \ln X_min ≤ c · Z ≤ \ln X_max, the assertion follows. This completes the proof of (i).

Proof of (ii). The following result

(2.3)    L\bigl( \mu; (\lambda X)^r \bigr) = \lambda^r L(\mu; X^r)


(λ > 0) is established in [9, (2.6)]. Assume that r ≠ s. Using (1.8) and (2.3) we obtain

E_{r,s}(\mu; \lambda X) = \left[ \frac{\lambda^r L(\mu; X^r)}{\lambda^s L(\mu; X^s)} \right]^{1/(r-s)} = \lambda E_{r,s}(\mu; X).

Consider now the case when r = s ≠ 0. Making use of (1.8) and (2.3) we obtain

E_{r,r}(\mu; \lambda X) = \exp\left( \frac{d}{dr} \ln L\bigl( \mu; (\lambda X)^r \bigr) \right)
= \exp\left( \frac{d}{dr} \ln\bigl( \lambda^r L(\mu; X^r) \bigr) \right)
= \exp\left( \frac{d}{dr} \bigl( r \ln \lambda + \ln L(\mu; X^r) \bigr) \right)
= \lambda E_{r,r}(\mu; X).

When r = s = 0, an easy computation shows that

(2.4)    E_{0,0}(\mu; X) = \prod_{i=1}^n x_i^{w_i} \equiv G(w; X),

where

(2.5)    w_i = \int_{E_{n-1}} u_i \mu(u) \, du

(1 ≤ i ≤ n) are called the natural weights or partial moments of the measure µ, and w = (w_1, \ldots, w_n). Since w_1 + \cdots + w_n = 1, E_{0,0}(µ; λX) = λ E_{0,0}(µ; X). The proof of (ii) is complete.

Proof of (iii). In order to establish the asserted property, let us note that the function r → exp(rt) is logarithmically convex (log-convex) in r. This, in conjunction with Theorem B.6 in [2], implies that the function r → L(µ; X^r) is also log-convex in r. It follows from (1.8) that

\ln E_{r,s}(\mu; X) = \frac{\ln L(\mu; X^r) - \ln L(\mu; X^s)}{r - s}.

The right side is the divided difference of order one at r and s. Convexity of \ln L(µ; X^r) in r implies that this divided difference increases with an increase in either r or s. This in turn implies that \ln E_{r,s}(µ; X) has the same property. Hence the monotonicity of the mean E_{r,s} in its parameters follows. Now let r = s. Then (1.8) yields

\ln E_{r,r}(\mu; X) = \frac{d}{dr} \ln L(\mu; X^r).

Since \ln L(µ; X^r) is convex in r, its derivative with respect to r increases with an increase in r. This completes the proof of (iii).

Proof of (iv). Let r ≠ s. It follows from (1.8) that

\frac{1}{r - s} \int_s^r \ln E_{t,t}(\mu; X) \, dt = \frac{1}{r - s} \int_s^r \frac{d}{dt} \ln L(\mu; X^t) \, dt = \frac{1}{r - s} \bigl( \ln L(\mu; X^r) - \ln L(\mu; X^s) \bigr) = \ln E_{r,s}(\mu; X).


Proof of (v). Let r ≠ s. Using (2.2) and (1.8) we obtain

E^{(p)}_{r,s}(\mu; X) = \bigl[ E_{r,s}(\mu; X^p) \bigr]^{1/p} = \left[ \frac{L(\mu; X^{pr})}{L(\mu; X^{ps})} \right]^{1/(p(r-s))} = E_{pr,ps}(\mu; X).

Assume now that r = s ≠ 0. Making use of (2.2), (1.8) and (1.7) we have

E^{(p)}_{r,r}(\mu; X) = \exp\left( \frac{1}{p} \frac{d}{dr} \ln L(\mu; X^{pr}) \right) = \exp\left( \frac{1}{L(\mu; X^{pr})} \int_{E_{n-1}} (u \cdot Z) \exp\bigl( pr(u \cdot Z) \bigr) \mu(u) \, du \right) = E_{pr,pr}(\mu; X).

The case when r = s = 0 is trivial because E_{0,0}(µ; X) is the weighted geometric mean of X.

Proof of (vi). Here we use (v) with p = −1 to obtain E_{r,s}(\mu; X^{-1}) = E^{-1}_{-r,-s}(\mu; X). Replacing X by X^{-1} we obtain the desired result.

Proof of (vii). There is nothing to prove when either p = r or p = s or r = s. In the other cases we use (1.8) to obtain the asserted result. This completes the proof. □

In the next theorem we give some inequalities involving the means under discussion.

Theorem 2.3. Let r, s ∈ \mathbb{R}. Then the following inequalities

(2.6)    E_{r,r}(\mu; X) \leq E_{r,s}(\mu; X) \leq E_{s,s}(\mu; X)

are valid provided r ≤ s. If s > 0, then

(2.7)    E_{r-s,0}(\mu; X) \leq E_{r,s}(\mu; X).

Inequality (2.7) is reversed if s < 0, and it becomes an equality if s = 0. Assume that r, s > 0 and let p ≤ q. Then

(2.8)    E^{(p)}_{r,s}(\mu; X) \leq E^{(q)}_{r,s}(\mu; X),

with the inequality reversed if r, s < 0.

Proof. Inequalities (2.6) and (2.7) follow immediately from part (iii) of Theorem 2.2. For the proof of (2.8), let r, s > 0 and let p ≤ q. Then pr ≤ qr and ps ≤ qs. Applying parts (v) and (iii) of Theorem 2.2, we obtain

E^{(p)}_{r,s}(\mu; X) = E_{pr,ps}(\mu; X) \leq E_{qr,qs}(\mu; X) = E^{(q)}_{r,s}(\mu; X).

When r, s < 0, the proof of (2.8) goes along the same lines, hence it is omitted. The proof is complete. □
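As a numerical sanity check (our own sketch, for n = 2 with the Lebesgue measure, where E_{r,s}(µ; X) reduces to the classical Stolarsky mean (1.1)), parts (v) and (vii) of Theorem 2.2 and inequalities (2.6) and (2.7) can be verified for sample arguments:

```python
import math

def E(r, s, x, y):
    # classical bivariate Stolarsky mean (1.1); assumes x != y, x, y > 0
    if r == s:                            # identric-type case, r = s != 0
        return math.exp(-1.0 / r + (x**r * math.log(x) - y**r * math.log(y))
                        / (x**r - y**r))
    if s == 0:
        return ((x**r - y**r) / (r * math.log(x / y))) ** (1.0 / r)
    if r == 0:
        return E(s, r, x, y)
    return (s * (x**r - y**r) / (r * (x**s - y**s))) ** (1.0 / (r - s))

x, y, r, s, p = 2.0, 7.0, 3.0, 1.0, 2.0

# Theorem 2.2 (v): [E_{r,s}(X^p)]^{1/p} = E_{pr,ps}(X)
assert abs(E(r, s, x**p, y**p) ** (1 / p) - E(p * r, p * s, x, y)) < 1e-9

# Theorem 2.2 (vii): E_{r,s}^{s-r} = E_{r,p}^{p-r} * E_{p,s}^{s-p}
assert abs(E(r, s, x, y) ** (s - r)
           - E(r, p, x, y) ** (p - r) * E(p, s, x, y) ** (s - p)) < 1e-10

# Theorem 2.3 (2.6): E_{r,r} <= E_{r,s} <= E_{s,s} for r <= s (here 1 <= 3)
assert E(1, 1, x, y) <= E(1, 3, x, y) <= E(3, 3, x, y)

# Theorem 2.3 (2.7): E_{r-s,0} <= E_{r,s} when s > 0
assert E(1 - 3, 0, x, y) <= E(1, 3, x, y)
```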

3. THE MEAN E_{r,s}(b; X)

An important probability measure on E_{n-1} is the Dirichlet measure µ_b(u), b ∈ \mathbb{R}^n_+ (see (1.9)). Its role in the theory of special functions is well documented in Carlson's monograph [2]. When µ = µ_b, the mean under discussion will be denoted by E_{r,s}(b; X). The natural weights w_i (see (2.5)) of µ_b are given explicitly by

(3.1)    w_i = b_i / c


(1 ≤ i ≤ n), where c = b_1 + \cdots + b_n (see [2, (5.6-2)]). For later use we define w = (w_1, \ldots, w_n).

Recall that the weighted Dresher mean D_{r,s}(p; X) of order (r, s) ∈ \mathbb{R}^2 of X ∈ \mathbb{R}^n_+ with weights p = (p_1, \ldots, p_n) ∈ \mathbb{R}^n_+ is defined as

(3.2)    D_{r,s}(p; X) =
\begin{cases}
\left( \dfrac{\sum_{i=1}^n p_i x_i^r}{\sum_{i=1}^n p_i x_i^s} \right)^{1/(r-s)}, & r \neq s, \\
\exp\left( \dfrac{\sum_{i=1}^n p_i x_i^r \ln x_i}{\sum_{i=1}^n p_i x_i^r} \right), & r = s
\end{cases}

(see, e.g., [1, Sec. 24]).

In this section we present two limit theorems for the mean E_{r,s} with the underlying measure being the Dirichlet measure. In order to facilitate the presentation we need the concept of the Dirichlet average of a function. Following [2, Def. 5.2-1], let Ω be a convex set in \mathbb{C} and let Y = (y_1, \ldots, y_n) ∈ Ω^n, n ≥ 2. Further, let f be a measurable function on Ω. Define

(3.3)    F(b; Y) = \int_{E_{n-1}} f(u \cdot Y) \mu_b(u) \, du.

Then F is called the Dirichlet average of f with variables Y = (y_1, \ldots, y_n) and parameters b = (b_1, \ldots, b_n). We need the following result [2, Ex. 6.3-4]. Let Ω be an open circular disk in \mathbb{C}, let f be holomorphic on Ω, let Y ∈ Ω^n, let c ∈ \mathbb{C} with c ≠ 0, −1, \ldots, and let w_1 + \cdots + w_n = 1. Then

(3.4)    \lim_{c \to 0} F(cw; Y) = \sum_{i=1}^n w_i f(y_i),

where cw = (cw_1, \ldots, cw_n). We are in a position to prove the following.
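Before turning to the theorems, the objects just introduced can be sketched numerically; the function names below are ours, not from the paper. The Dresher mean (3.2) transcribes directly, and the Dirichlet average (3.3) can be estimated by Monte Carlo, sampling a Dirichlet(b) point as normalized Gamma(b_i, 1) variates. Convergence in the limit (3.4) is slow in c, so the tolerance for observing it is necessarily loose.

```python
import math
import random

def dresher(r, s, p, X):
    # weighted Dresher mean D_{r,s}(p; X), transcribing (3.2)
    num = sum(pi * x**r for pi, x in zip(p, X))
    if r == s:
        return math.exp(sum(pi * x**r * math.log(x)
                            for pi, x in zip(p, X)) / num)
    den = sum(pi * x**s for pi, x in zip(p, X))
    return (num / den) ** (1.0 / (r - s))

def dirichlet_average(f, b, Y, trials=50_000, seed=3):
    # Monte Carlo estimate of F(b; Y) in (3.3): E[f(u . Y)], u ~ Dirichlet(b)
    rng = random.Random(seed)
    total, count = 0.0, 0
    for _ in range(trials):
        g = [rng.gammavariate(bi, 1.0) for bi in b]
        s = sum(g)
        if s == 0.0:          # guard: extreme underflow for tiny shape values
            continue
        total += f(sum(gi / s * yi for gi, yi in zip(g, Y)))
        count += 1
    return total / count

# limit (3.4): F(cw; Y) -> sum w_i f(y_i) as c -> 0, here with f = exp
w = [0.2, 0.3, 0.5]
Y = [0.1, 0.7, 1.3]
approx = dirichlet_average(math.exp, [0.05 * wi for wi in w], Y)
exact = sum(wi * math.exp(yi) for wi, yi in zip(w, Y))
```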

Theorem 3.1. Let w_1 > 0, \ldots, w_n > 0 with w_1 + \cdots + w_n = 1. If r, s ∈ \mathbb{R} and X ∈ \mathbb{R}^n_+, then

\lim_{c \to 0^+} E_{r,s}(cw; X) = D_{r,s}(w; X).

Proof. We use (1.7) and (3.3) to obtain L(cw; X) = F(cw; Z), where Z = \ln X = (\ln x_1, \ldots, \ln x_n). Making use of (3.4) with f(t) = exp(t) and Y = \ln X we obtain

\lim_{c \to 0^+} L(cw; X) = \sum_{i=1}^n w_i x_i.

Hence

(3.5)    \lim_{c \to 0^+} L(cw; X^r) = \sum_{i=1}^n w_i x_i^r.

Assume that r ≠ s. Application of (3.5) to (1.8) gives

\lim_{c \to 0^+} E_{r,s}(cw; X) = \lim_{c \to 0^+} \left[ \frac{L(cw; X^r)}{L(cw; X^s)} \right]^{1/(r-s)} = \left( \frac{\sum_{i=1}^n w_i x_i^r}{\sum_{i=1}^n w_i x_i^s} \right)^{1/(r-s)} = D_{r,s}(w; X).

Let r = s. Application of (3.4) with f(t) = t exp(rt) gives

\lim_{c \to 0^+} F(cw; Z) = \sum_{i=1}^n w_i z_i \exp(r z_i) = \sum_{i=1}^n w_i (\ln x_i) x_i^r.


This in conjunction with (3.5) and (1.8) gives

\lim_{c \to 0^+} E_{r,r}(cw; X) = \lim_{c \to 0^+} \exp\left( \frac{F(cw; Z)}{L(cw; X^r)} \right) = \exp\left( \frac{\sum_{i=1}^n w_i x_i^r \ln x_i}{\sum_{i=1}^n w_i x_i^r} \right) = D_{r,r}(w; X).

This completes the proof. □

Theorem 3.2. Under the assumptions of Theorem 3.1 one has

(3.6)    \lim_{c \to \infty} E_{r,s}(cw; X) = G(w; X).

Proof. The following limit (see [9, (4.10)])

(3.7)    \lim_{c \to \infty} L(cw; X) = G(w; X)

will be used in the sequel. We shall establish (3.6) first when r ≠ s. It follows from (1.8) and (3.7) that

\lim_{c \to \infty} E_{r,s}(cw; X) = \lim_{c \to \infty} \left[ \frac{L(cw; X^r)}{L(cw; X^s)} \right]^{1/(r-s)} = \bigl[ G(w; X)^{r-s} \bigr]^{1/(r-s)} = G(w; X).

Assume that r = s. We shall prove first that

(3.8)    \lim_{c \to \infty} F(cw; Z) = \bigl( \ln G(w; X) \bigr) G(w; X)^r,

where F is the Dirichlet average of f(t) = t exp(rt). Averaging both sides of

f(t) = \sum_{m=0}^{\infty} \frac{r^m}{m!} t^{m+1}

we obtain

(3.9)    F(cw; Z) = \sum_{m=0}^{\infty} \frac{r^m}{m!} R_{m+1}(cw; Z),

where R_{m+1} stands for the Dirichlet average of the power function t^{m+1}. We will show that the series in (3.9) converges uniformly in 0 < c < ∞; this in turn implies that as c → ∞ we can proceed to the limit term by term. Making use of [2, (6.2-24)] we obtain

|R_{m+1}(cw; Z)| \leq |Z|^{m+1}, \quad m \in \mathbb{N},

where |Z| = \max\{ |\ln x_i| : 1 \leq i \leq n \}. By the Weierstrass M-test the series in (3.9) converges uniformly in the stated domain. Taking limits on both sides of (3.9) we obtain, with the aid of (3.4),

\lim_{c \to \infty} F(cw; Z) = \sum_{m=0}^{\infty} \frac{r^m}{m!} \lim_{c \to \infty} R_{m+1}(cw; Z)
= \sum_{m=0}^{\infty} \frac{r^m}{m!} \left( \sum_{i=1}^n w_i z_i \right)^{m+1}
= \bigl( \ln G(w; X) \bigr) \sum_{m=0}^{\infty} \frac{r^m}{m!} \bigl( \ln G(w; X) \bigr)^m
= \bigl( \ln G(w; X) \bigr) \sum_{m=0}^{\infty} \frac{1}{m!} \bigl( r \ln G(w; X) \bigr)^m
= \bigl( \ln G(w; X) \bigr) G(w; X)^r.


This completes the proof of (3.8). To complete the proof of (3.6) we use (1.8), (3.7), and (3.8) to obtain

\lim_{c \to \infty} \ln E_{r,r}(cw; X) = \lim_{c \to \infty} \frac{F(cw; Z)}{L(cw; X^r)} = \frac{\bigl( \ln G(w; X) \bigr) G(w; X)^r}{G(w; X)^r} = \ln G(w; X).

Hence the assertion follows. □

APPENDIX A. TOTAL POSITIVITY OF E^{r-s}_{r,s}(x, y)

A real-valued function h(x, y) of two real variables is said to be strictly totally positive on its domain if every n × n determinant with elements h(x_i, y_j), where x_1 < x_2 < \cdots < x_n and y_1 < y_2 < \cdots < y_n, is strictly positive for every n = 1, 2, \ldots (see [5]).

The goal of this section is to prove that the function E^{r-s}_{r,s}(x, y) is strictly totally positive as a function of x and y provided the parameters r and s satisfy a certain condition. For later use we recall the definition of the R-hypergeometric function R_{-\alpha}(\beta, \beta'; x, y) of two variables x, y > 0 with parameters β, β' > 0:

(A1)    R_{-\alpha}(\beta, \beta'; x, y) = \frac{\Gamma(\beta + \beta')}{\Gamma(\beta)\Gamma(\beta')} \int_0^1 u^{\beta - 1} (1 - u)^{\beta' - 1} \bigl( ux + (1 - u)y \bigr)^{-\alpha} \, du

(see [2, (5.9-1)]).

Proposition A.1. Let x, y > 0 and let r, s ∈ \mathbb{R}. If |r| < |s|, then E^{r-s}_{r,s}(x, y) is strictly totally positive on \mathbb{R}^2_+.

Proof. Using (1.3) and (A1) we have

(A2)    E^{r-s}_{r,s}(x, y) = R_{(r-s)/s}(1, 1; x^s, y^s)

(s(r − s) ≠ 0). B. Carlson and J. Gustafson [3] have proven that R_{-\alpha}(\beta, \beta'; x, y) is strictly totally positive in x and y provided β, β' > 0 and 0 < α < β + β'. Letting α = 1 − r/s, β = β' = 1, x := x^s, y := y^s, and next using (A2), we obtain the desired result. □

Corollary A.2. Let 0 < x1 < x2, 0 < y1 < y2 and let the real numbers r and s satisfy the inequality |r| < |s|. If s > 0, then

(A3)    E_{r,s}(x_1, y_1) E_{r,s}(x_2, y_2) < E_{r,s}(x_1, y_2) E_{r,s}(x_2, y_1).

Inequality (A3) is reversed if s < 0.

Proof. Let a_{ij} = E^{r-s}_{r,s}(x_i, y_j) (i, j = 1, 2). It follows from Proposition A.1 that \det[a_{ij}] > 0 provided |r| < |s|. This in turn implies

\bigl( E_{r,s}(x_1, y_1) E_{r,s}(x_2, y_2) \bigr)^{r-s} > \bigl( E_{r,s}(x_1, y_2) E_{r,s}(x_2, y_1) \bigr)^{r-s}.

Assume that s > 0. Then the inequality |r| < s implies r − s < 0. Hence (A3) follows when s > 0. The case when s < 0 is treated in a similar way. □
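Corollary A.2 can be spot-checked numerically for particular points; this is an illustrative sketch (our own variable names), using the closed form (1.1) for the case rs(r − s) ≠ 0.

```python
def E(r, s, x, y):
    # bivariate Stolarsky mean (1.1), case rs(r - s) != 0
    return (s * (x**r - y**r) / (r * (x**s - y**s))) ** (1.0 / (r - s))

x1, x2, y1, y2 = 1.0, 3.0, 2.0, 5.0   # 0 < x1 < x2, 0 < y1 < y2

# s > 0 with |r| < |s|: inequality (A3) as stated
lhs_pos = E(1, 2, x1, y1) * E(1, 2, x2, y2)
rhs_pos = E(1, 2, x1, y2) * E(1, 2, x2, y1)

# s < 0 with |r| < |s|: (A3) is reversed
lhs_neg = E(1, -2, x1, y1) * E(1, -2, x2, y2)
rhs_neg = E(1, -2, x1, y2) * E(1, -2, x2, y1)
```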

REFERENCES

[1] E.F. BECKENBACH AND R. BELLMAN, Inequalities, Springer-Verlag, Berlin, 1961.

[2] B.C. CARLSON, Special Functions of Applied Mathematics, Academic Press, New York, 1977.

[3] B.C. CARLSON AND J.L. GUSTAFSON, Total positivity of mean values and hypergeometric functions, SIAM J. Math. Anal., 14(2) (1983), 389–395.

[4] P. CZINDER AND ZS. PÁLES, An extension of the Hermite-Hadamard inequality and an application for Gini and Stolarsky means, J. Ineq. Pure Appl. Math., 5(2) (2004), Art. 42. [ONLINE: http://jipam.vu.edu.au/article.php?sid=399].


[5] S. KARLIN, Total Positivity, Stanford Univ. Press, Stanford, CA, 1968.

[6] E.B. LEACH AND M.C. SHOLANDER, Extended mean values, Amer. Math. Monthly, 85(2) (1978), 84–90.

[7] E.B. LEACH AND M.C. SHOLANDER, Multi-variable extended mean values, J. Math. Anal. Appl., 104 (1984), 390–407.

[8] J.K. MERIKOWSKI, Extending means of two variables to several variables, J. Ineq. Pure Appl. Math., 5(3) (2004), Art. 65. [ONLINE: http://jipam.vu.edu.au/article.php?sid=411].

[9] E. NEUMAN, The weighted logarithmic mean, J. Math. Anal. Appl., 188 (1994), 885–900.

[10] E. NEUMAN AND ZS. PÁLES, On comparison of Stolarsky and Gini means, J. Math. Anal. Appl., 278 (2003), 274–284.

[11] E. NEUMAN, C.E.M. PEARCE, J. PEČARIĆ AND V. ŠIMIĆ, The generalized Hadamard inequality, g-convexity and functional Stolarsky means, Bull. Austral. Math. Soc., 68 (2003), 303–316.

[12] E. NEUMAN AND J. SÁNDOR, Inequalities involving Stolarsky and Gini means, Math. Pannonica, 14(1) (2003), 29–44.

[13] ZS. PÁLES, Inequalities for differences of powers, J. Math. Anal. Appl., 131 (1988), 271–281.

[14] C.E.M. PEARCE, J. PEČARIĆ AND V. ŠIMIĆ, Functional Stolarsky means, Math. Inequal. Appl., 2 (1999), 479–489.

[15] J. PEČARIĆ AND V. ŠIMIĆ, The Stolarsky-Tobey mean in n variables, Math. Inequal. Appl., 2 (1999), 325–341.

[16] K.B. STOLARSKY, Generalizations of the logarithmic mean, Math. Mag., 48(2) (1975), 87–92.

[17] K.B. STOLARSKY, The power and generalized logarithmic means, Amer. Math. Monthly, 87(7) (1980), 545–548.

[18] M.D. TOBEY, A two-parameter homogeneous mean value, Proc. Amer. Math. Soc., 18 (1967), 9–14.
