arXiv:1710.03637v2 [math.PR] 2 Sep 2018

Evaluation of Harmonic Sums with Integrals

Vivek Kaushik and Daniele Ritelli

September 5, 2018

Abstract

We consider the sums S(k) = \sum_{n=0}^{\infty} \frac{(-1)^{nk}}{(2n+1)^k} and \zeta(2k) = \sum_{n=1}^{\infty} \frac{1}{n^{2k}} with k being a positive integer. We evaluate these sums with multiple integration, a modern technique. First, we start with three different double integrals that have been previously used in the literature to show S(2) = \pi^2/8, which implies Euler's identity \zeta(2) = \pi^2/6. Then, we generalize each integral in order to find the considered sums. The k dimensional analogue of the first integral is the density function of the quotient of k independent, nonnegative Cauchy random variables. In seeking this function, we encounter a special logarithmic integral that we can directly relate to S(k). The k dimensional analogue of the second integral, upon a change of variables, is the volume of a convex polytope, which can be expressed as a probability involving certain pairwise sums of k independent uniform random variables. We use combinatorial arguments to find the volume, which in turn gives new closed formulas for S(k) and \zeta(2k). The k dimensional analogue of the last integral, upon another change of variables, is the integral of the joint density function of k independent, nonnegative Cauchy random variables over a hyperbolic polytope. This integral can be expressed as a probability involving certain pairwise products of these random variables, and it is equal to the probability from the second generalization. Thus, we specifically highlight the similarities in the combinatorial arguments between the second and third generalizations.

1 Introduction

One of the most celebrated problems in Classical Analysis is the Basel Problem, which is to evaluate the infinite series

\zeta(2) = \sum_{n=1}^{\infty} \frac{1}{n^2}.   (1.1)

Pietro Mengoli initially posed this problem in 1644, and Euler [8] was the first to give the correct answer \zeta(2) = \pi^2/6. Since Euler's time, however, more solutions to the problem have appeared in the literature. For example, Kalman [11] records ones with techniques from different areas such as Fourier Analysis and complex variables. In this article, we generalize solutions involving multiple integration to the closely related problem of finding the sums:

S(k) = \sum_{n=0}^{\infty} \frac{(-1)^{nk}}{(2n+1)^k}, \quad k \in \mathbb{N},   (1.2)

\zeta(2k) = \sum_{n=1}^{\infty} \frac{1}{n^{2k}}, \quad k \in \mathbb{N},   (1.3)

with the latter sum being the Riemann Zeta Function evaluated at the positive even integers. Apostol [1] pioneered this approach on \zeta(2). He evaluates
\int_0^1 \int_0^1 \frac{1}{1-xy}\,dx\,dy   (1.4)

in two ways: first, by converting the integrand into a geometric series and exchanging summation and integration to obtain \zeta(2), and then using the linear change of variables

x = \frac{u+v}{\sqrt{2}}, \quad y = \frac{u-v}{\sqrt{2}}

to obtain \pi^2/6 upon evaluating arctangent integrals. We focus on three different double integrals as our starting point:
D_1 = \int_0^\infty \int_0^\infty \frac{y}{(x^2y^2+1)(y^2+1)}\,dx\,dy,   (1.5)

whose variations appear in [3, 10, 14, 15, 16],
D_2 = \int_0^1 \int_0^1 \frac{1}{1-x^2y^2}\,dx\,dy,   (1.6)

which appears in [2, 7], and
D_3 = \int_0^1 \int_0^1 \frac{1}{\sqrt{xy}\,(1-xy)}\,dx\,dy,   (1.7)

which appears in [12, p. 9]. Each integral is used to show
S(2) = \sum_{n=0}^{\infty} \frac{1}{(2n+1)^2} = \frac{\pi^2}{8},   (1.8)

from which we can algebraically obtain \zeta(2) = \pi^2/6.

First, D_1 evaluates to \pi^2/4, but upon reversing the order of integration, it becomes
\int_0^\infty \frac{\ln(x)}{x^2-1}\,dx,   (1.9)

which we show is 2S(2). We generalize D_1, considering the density function of the quotient of k Cauchy random variables expressed as a multiple integral. Our approach differs from that of Bourgade, Fujita and Yor [3], who also use these same random variables. We evaluate this multiple integral by repeatedly reversing the order of integration and using partial fractions several times. The result is a logarithmic integral generalizing (1.9), which we can relate directly back to S(k). We then modify our approach to find special closed forms of the bilateral alternating series
S(k,a) = \sum_{n=-\infty}^{\infty} \frac{(-1)^{nk}}{(an+1)^k}, \quad a > 1,\ \frac{1}{a} \notin \mathbb{N},\ k \in \mathbb{N},   (1.10)

of which the special case k = 2 is examined in [5].
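As a quick numerical sanity check (an illustration, not part of the original solutions), the logarithmic integral (1.9) can be estimated with a midpoint rule: its value over (0,1) is S(2) = \pi^2/8, and by the symmetry x \mapsto 1/x the full integral is twice that, \pi^2/4.

```python
import math

# Midpoint-rule estimate of the integral of ln(x)/(x^2 - 1) over (0, 1);
# the integrand extends continuously to x = 1 (limit 1/2) and has only an
# integrable log singularity at 0. The full integral over (0, inf) is
# twice this value, i.e. 2*S(2) = pi^2/4.
n = 400_000
h = 1.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * h
    total += math.log(x) / (x * x - 1.0)
integral01 = total * h
print(2 * integral01, math.pi**2 / 4)  # the two values agree closely
```

The quadrature converges slowly near 0 because of the logarithm, but the midpoint evaluations avoid the singular endpoint itself.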
Next, D_2, similar to (1.4), can be evaluated in two ways. First, we convert the integrand into a geometric series and exchange summation and integration to obtain S(2). Then, using Calabi's trigonometric change of variables [2] (the hyperbolic version is provided in [6, 13, 17, 18])

x = \frac{\sin(u)}{\cos(v)}, \quad y = \frac{\sin(v)}{\cos(u)},   (1.11)

whose Jacobian determinant is \frac{\partial(x,y)}{\partial(u,v)} = 1 - x^2y^2, we see D_2 is the area of a right isosceles triangle with the shorter legs each of length \pi/2. Hence, D_2 = \pi^2/8. The generalized version of this approach leads us to a convex polytope
\Delta^k = \{(u_1,\ldots,u_k) \in \mathbb{R}^k : u_i + u_{i+1} < 1,\ u_i > 0,\ 1 \le i \le k\},   (1.12)

in which we use cyclical indexing mod k: u_{k+1} := u_1. The volume of \Delta^k, which is equal to (2/\pi)^k S(k), has already been computed in three different ways: Beukers, Calabi, and Kolk [2] dissect \Delta^k into pairwise disjoint congruent pyramids, Elkies [7] and Silagadze [18] both perform spectral analysis on its characteristic function, and Stanley [20] uses properties of alternating permutations. We give another approach to the volume computation, viewing it as the probability that k independent uniform random variables on (0,1) have cyclically pairwise consecutive sums less than 1. We use combinatorial arguments to evaluate the probability. Our approach leads to new and interesting closed formulas for S(k) and \zeta(2k) that do not invoke the traditional Bernoulli and Euler numbers.

Finally, upon a substitution x = \sqrt{u}, y = \sqrt{v} to D_2, we see that D_3 = 4S(2). On the other hand, Zagier's and Kontsevich's change of variables [12, p. 9]
x = \frac{\xi^2(\eta^2+1)}{\xi^2+1}, \quad y = \frac{\eta^2(\xi^2+1)}{\eta^2+1},   (1.13)
which has Jacobian determinant \frac{\partial(x,y)}{\partial(\xi,\eta)} = \frac{4\sqrt{xy}\,(1-xy)}{(\xi^2+1)(\eta^2+1)}, transforms D_3 into
\iint_{\xi,\eta>0,\ \xi\eta<1} \frac{4}{(\xi^2+1)(\eta^2+1)}\,d\eta\,d\xi,

whose value is \pi^2/2. We generalize this approach, which leads us to integrating the joint density function of k independent, nonnegative Cauchy random variables over a hyperbolic polytope:
H^k = \{(\xi_1,\ldots,\xi_k) \in \mathbb{R}^k : \xi_i\xi_{i+1} < 1,\ \xi_i > 0,\ 1 \le i \le k\},   (1.14)

in which we use cyclical indexing mod k: \xi_{k+1} := \xi_1. This is the same as the probability that k independent, nonnegative Cauchy random variables have cyclically pairwise consecutive products less than 1. Combinatorial analysis of this probability leads to the exact same closed formulas as in the second approach, and we highlight the similarities between this analysis and that of our second approach. Hence, this approach, along with the second, induces two equivalent probabilistic viewpoints of S(k) and \zeta(2k). Finally, it is worth noting that \zeta(2k) can be obtained from S(2k) by observing

\zeta(2k) = \frac{1}{2^{2k}}\,\zeta(2k) + S(2k),

which implies

\zeta(2k) = \frac{2^{2k}}{2^{2k}-1}\,S(2k).   (1.15)

1.1 Some Concepts From Probability Theory

We briefly recall some notions from probability theory that we use throughout this article. Let A_1,\ldots,A_k be k events. We denote by Pr(A_1,\ldots,A_k) the probability that A_1,\ldots,A_k occur at the same time. Let X_1,\ldots,X_k be k continuous, nonnegative random variables. Their joint density function f_{X_1,\ldots,X_k}(x_1,\ldots,x_k) is a nonnegative function such that
\int_0^\infty \cdots \int_0^\infty f_{X_1,\ldots,X_k}(x_1,\ldots,x_k)\,dx_1\ldots dx_k = 1,   (1.16)

and for all a_1, b_1, \ldots, a_k, b_k \ge 0, we have

Pr(a_1 \le X_1 \le b_1, \ldots, a_k \le X_k \le b_k) = \int_{a_k}^{b_k} \cdots \int_{a_1}^{b_1} f_{X_1,\ldots,X_k}(x_1,\ldots,x_k)\,dx_1\ldots dx_k.   (1.17)
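The polytope volume computation described in the introduction is exactly a probability of the form (1.17). As a hedged illustration (not from the paper), a Monte Carlo experiment confirms the k = 3 case, where the closed form (2/\pi)^3 S(3) = (8/\pi^3)(\pi^3/32) evaluates to 1/4:

```python
import random

# Monte Carlo estimate of vol(Delta^3): the probability that three
# independent uniforms on (0,1) have all cyclic pairwise sums below 1.
# The closed form (2/pi)^3 * S(3) equals exactly 1/4.
rng = random.Random(12345)   # fixed seed for reproducibility
trials, hits = 200_000, 0
for _ in range(trials):
    u1, u2, u3 = rng.random(), rng.random(), rng.random()
    if u1 + u2 < 1 and u2 + u3 < 1 and u3 + u1 < 1:
        hits += 1
print(hits / trials)   # close to 0.25
```

With 200,000 samples the standard error is about 0.001, so the estimate lands well within a percent of 1/4.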
The joint cumulative distribution function of X1,...,Xk is the function
F_{X_1,\ldots,X_k}(x_1,\ldots,x_k) = Pr(X_1 \le x_1, \ldots, X_k \le x_k).   (1.18)
In the case there is one random variable X_1, we simply refer to f_{X_1}(x_1) as its density function and F_{X_1}(x_1) as its cumulative distribution function. It follows from the Fundamental Theorem of Calculus that

\frac{\partial^k}{\partial x_1 \ldots \partial x_k} F_{X_1,\ldots,X_k} = f_{X_1,\ldots,X_k},   (1.19)

and in the case of only one variable X_1, we have F'_{X_1} = f_{X_1}. If X_1,\ldots,X_k are independent, then we have
F_{X_1,\ldots,X_k}(x_1,\ldots,x_k) = F_{X_1}(x_1)\ldots F_{X_k}(x_k),   (1.20)

and from (1.19), we have
f_{X_1,\ldots,X_k}(x_1,\ldots,x_k) = f_{X_1}(x_1)\ldots f_{X_k}(x_k).   (1.21)
Let X and Y be independent, nonnegative random variables with density functions f_X(x) and f_Y(y), respectively. Their product Z = XY is a new random variable with density function
f_Z(z) = \int_0^\infty \frac{1}{x}\,f_Y\!\left(\frac{z}{x}\right) f_X(x)\,dx.   (1.22)

If X \neq 0, their quotient T = Y/X has density function

f_T(t) = \int_0^\infty x\,f_Y(tx)\,f_X(x)\,dx.   (1.23)

Random variable operations are studied in Springer's book [19]. Finally, the nonnegative random variable X is said to be Cauchy if

f_X(x) = \frac{2}{\pi}\,\frac{1}{x^2+1},   (1.24)

and X is said to be uniform on (a,b) with a < b if f_X(x) = \frac{1}{b-a} for x \in (a,b).
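To illustrate the Cauchy density (1.24) with a small experiment (again an illustration, not part of the paper): for two independent nonnegative Cauchy variables, the event XY < 1 is the k = 2 instance of the hyperbolic polytope probability from the introduction, and since 1/X is again nonnegative Cauchy, Pr(XY < 1) = 1/2 by symmetry.

```python
import math, random

# For a nonnegative Cauchy X (density 2/pi * 1/(x^2+1)), the CDF is
# (2/pi)*arctan(x), so X = tan(pi*U/2) with U uniform on (0,1).
# Since 1/X has the same distribution as X, Pr(XY < 1) = 1/2.
rng = random.Random(2018)   # fixed seed for reproducibility
trials, hits = 200_000, 0
for _ in range(trials):
    x = math.tan(math.pi * rng.random() / 2)
    y = math.tan(math.pi * rng.random() / 2)
    if x * y < 1:
        hits += 1
print(hits / trials)   # close to 0.5
```

This matches the value (2/\pi)^2 \cdot \pi^2/8 = 1/2 obtained by integrating the joint density over the region \xi\eta < 1.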
We first give a slightly modified version of the solution to the Basel Problem given by Pace [15] (see also [3]), which we will then generalize. Using the results of our generalization, we highlight two specific applications, in which we show S(3) = \pi^3/32 and S(4) = \pi^4/96. The latter result gives a new proof of \zeta(4) = \pi^4/90 with multiple integration. Finally, we extend our generalization to find S(k,a) and provide closed formulas for the cases k = 2 and 3.
2.1 Luigi Pace’s Solution to the Basel Problem
Let X1 and X2 be independent, nonnegative Cauchy random variables. Define
Z = X1/X2.
We compute the density function f_Z(z). By (1.23) and (1.24), we have
f_Z(z) = \frac{4}{\pi^2} \int_0^\infty \frac{x_2}{(z^2x_2^2+1)(x_2^2+1)}\,dx_2.   (2.1)

To evaluate (2.1), we use the partial fractions identity
\frac{x}{(x^2y^2+1)(x^2+1)} = \frac{xy^2}{(x^2y^2+1)(y^2-1)} - \frac{x}{(x^2+1)(y^2-1)}.   (2.2)

Hence,

f_Z(z) = \frac{4}{\pi^2}\,\frac{\ln(z)}{z^2-1}.

Using (1.16) and then rearranging terms, we have
\int_0^\infty \frac{\ln(z)}{z^2-1}\,dz = \frac{\pi^2}{4}.   (2.3)

Following the argument of [16], we write
\int_0^\infty \frac{\ln(z)}{z^2-1}\,dz = \int_0^1 \frac{\ln(z)}{z^2-1}\,dz + \int_1^\infty \frac{\ln(z)}{z^2-1}\,dz   (2.4)

= 2\int_0^1 \frac{\ln(z)}{z^2-1}\,dz,   (2.5)

in which (2.5) follows from a substitution z = 1/u applied to the second term on the right hand side of (2.4). We expand the integrand of (2.5) into a geometric series
\frac{\ln(z)}{z^2-1} = -\sum_{n=0}^{\infty} z^{2n}\ln(z).   (2.6)

Putting the right hand side of (2.6) in place of the integrand in (2.5), we have that (2.5) is equal to
2\int_0^1 \left(-\sum_{n=0}^{\infty} z^{2n}\ln(z)\right) dz = -2\sum_{n=0}^{\infty} \int_0^1 z^{2n}\ln(z)\,dz   (2.7)

= 2\sum_{n=0}^{\infty} \frac{1}{(2n+1)^2},   (2.8)
in which (2.7) follows from the Monotone Convergence Theorem (see [4, pp. 95-96]), and (2.8) follows upon integration by parts. Hence, we have
S(2) = \frac{\pi^2}{8}.

Finally, using (1.15), we obtain

\zeta(2) = \frac{4}{3}\,S(2) = \frac{\pi^2}{6}.

Thus, the Basel Problem is solved.
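As an independent check on Pace's computation (a numerical sketch under our own discretization choices, not part of the original proof), one can evaluate the inner integral of (2.1) at a fixed z and compare it with the closed form \ln(z)/(z^2-1):

```python
import math

# Evaluate the inner integral of (2.1) at z = 2 by a midpoint rule,
# mapping (0,1) onto (0,inf) via x = t/(1-t); the result should match
# ln(2)/(2^2-1) = ln(2)/3, so that f_Z(2) = (4/pi^2) * ln(2)/3.
z, n = 2.0, 200_000
h = 1.0 / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * h
    x = t / (1.0 - t)
    w = 1.0 / (1.0 - t) ** 2          # dx/dt for the change of variables
    total += x / ((z * z * x * x + 1.0) * (x * x + 1.0)) * w
inner = total * h
print(inner, math.log(2) / 3)         # the two values agree closely
```

The transformed integrand is smooth and vanishes at both endpoints, so the midpoint rule converges quickly here.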
2.2 Generalization for S(k)

We now give the generalization of Pace's solution, as well as of all the solutions presented in [10, 14, 15, 16]. Let X_1,\ldots,X_k be k independent, nonnegative Cauchy random variables with k \ge 2. Define the quotient

Z_k = X_1/X_2/\cdots/X_k.
We seek the density function f_{Z_k}(z).

Theorem 2.2.1. The density function f_{Z_k}(z) is
f_{Z_k}(z) = \frac{2^k}{\pi^k} \int_0^\infty \cdots \int_0^\infty \frac{x_2\ldots x_k}{(z^2x_2^2\ldots x_k^2+1)(x_2^2+1)\ldots(x_k^2+1)}\,dx_2\ldots dx_k.   (2.9)

Proof. The case k = 2 was already computed in (2.1). Let the statement hold for k = m with m \ge 2. We show it must also hold for k = m+1. Applying (1.23) to Z_{m+1} = Z_m/X_{m+1}, we have
f_{Z_{m+1}}(z) = \int_0^\infty x_{m+1}\,f_{Z_m}(zx_{m+1})\,f_{X_{m+1}}(x_{m+1})\,dx_{m+1}

= \frac{2^{m+1}}{\pi^{m+1}} \int_0^\infty \cdots \int_0^\infty \frac{x_2\ldots x_{m+1}}{(z^2x_2^2\ldots x_{m+1}^2+1)(x_2^2+1)\ldots(x_{m+1}^2+1)}\,dx_2\ldots dx_{m+1},   (2.10)

in which (2.10) is the result of the inductive hypothesis applied to f_{Z_m}(z). Note that the integrand of (2.10) is nonnegative. Thus, Tonelli's Theorem allows us to reverse the order of integration on the integral
\int_0^\infty f_{Z_{m+1}}(z)\,dz.   (2.11)

Integrating (2.11) with respect to z first, and then with respect to each of the other m variables, we see (2.11) is equal to 1, satisfying (1.16).
Remark. Theorem 2.2.1 still holds if Z_k is alternatively formulated as the quotient of X_1 and the product X_2\ldots X_k. In this case, we would need to use (1.22) and (1.23).

The crux of our generalization is evaluating (2.9), which involves reversing the order of integration several times and using a generalized partial fractions identity
\frac{x}{(x^2y^2-(-1)^s)(x^2+1)} = \frac{xy^2}{(x^2y^2-(-1)^s)(y^2+(-1)^s)} - \frac{x}{(x^2+1)(y^2+(-1)^s)}, \quad s \in \mathbb{N}.   (2.12)
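Since the identity (2.12) carries the whole generalization, a quick spot-check at random points (our own verification sketch, not from the paper) is reassuring; note that for s odd it reduces to (2.2):

```python
import random

# Spot-check the generalized partial fractions identity (2.12)
# at random points, for both parities of s.
rng = random.Random(7)
checked = 0
while checked < 200:
    s = rng.choice([1, 2])
    c = (-1.0) ** s
    x, y = rng.uniform(0.1, 5.0), rng.uniform(0.1, 5.0)
    if abs(x * x * y * y - c) < 1e-2 or abs(y * y + c) < 1e-2:
        continue                       # skip near-singular sample points
    lhs = x / ((x * x * y * y - c) * (x * x + 1))
    rhs = (x * y * y / ((x * x * y * y - c) * (y * y + c))
           - x / ((x * x + 1) * (y * y + c)))
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
    checked += 1
print("identity verified at", checked, "sample points")
```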
Along the way, we encounter integrals of the form
\int_0^\infty \frac{\ln^k(z)}{z^2-(-1)^k}\,dz,   (2.13)

which vanish, according to Gradshteyn and Ryzhik table entries [9, Section 4.271: 7 and 9]. The end result is that we arrive at a logarithmic integral of the form
J_k = \int_0^\infty \frac{\ln^{k-1}(z)}{z^2-(-1)^k}\,dz.   (2.14)

By splitting the region of integration in the same way as in (2.4) and making a substitution z = 1/u in the resulting second term, we find that
J_k = 2\int_0^1 \frac{\ln^{k-1}(z)}{z^2-(-1)^k}\,dz.   (2.15)

From entries [9, Section 4.271: 6 and 10], we have
J_k = \begin{cases} \dfrac{2^{2k}-2^k}{k\,2^k}\,\pi^k\,|B_k| & k \text{ even}, \\[1ex] \dfrac{\pi^k}{2^k}\,|E_{k-1}| & k \text{ odd}. \end{cases}   (2.16)

Here, B_m and E_m denote the Bernoulli number and Euler number of order m, respectively. These numbers satisfy
\frac{x}{e^x-1} = \sum_{m=0}^{\infty} \frac{B_m}{m!}\,x^m, \qquad \frac{1}{\cosh(x)} = \sum_{m=0}^{\infty} \frac{E_m}{m!}\,x^m.

It is worth noting that B_m and E_m can be alternatively defined through the generating functions of \tan(x) and \sec(x), respectively (see [20]). Finally, we convert the integrand in (2.15) into a geometric series

\frac{\ln^{k-1}(z)}{z^2-(-1)^k} = -\sum_{n=0}^{\infty} (-1)^{(n+1)k}\,\ln^{k-1}(z)\,z^{2n}.   (2.17)

We put the series in (2.17) in place of the original integrand in (2.15), then exchange summation and integration as permitted by the Monotone Convergence Theorem, and finally use repeated integration by parts to obtain the result

S(k) = \frac{J_k}{2(k-1)!}.   (2.18)

2.3 Evaluating S(3) and S(4)

We use our density function results to evaluate S(3) and S(4), with the latter giving \zeta(4) = \pi^4/90.

Theorem 2.3.1. The value of S(3) is
S(3) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^3} = \frac{\pi^3}{32}.
Proof. Let X_1, X_2, and X_3 be independent, nonnegative Cauchy random variables, and let Z_3 be their quotient. Then, by Theorem 2.2.1, we have the density function
f_{Z_3}(z) = \frac{8}{\pi^3} \int_0^\infty \int_0^\infty \frac{x_2x_3}{(z^2x_2^2x_3^2+1)(x_2^2+1)(x_3^2+1)}\,dx_2\,dx_3.

Using the partial fractions identity in (2.12), we have
f_{Z_3}(z) = \frac{8}{\pi^3} \int_0^\infty \frac{x_3\ln(zx_3)}{(z^2x_3^2-1)(x_3^2+1)}\,dx_3

= \frac{8}{\pi^3} \int_0^\infty \frac{x_3\ln(z)}{(z^2x_3^2-1)(x_3^2+1)}\,dx_3 + \frac{8}{\pi^3} \int_0^\infty \frac{x_3\ln(x_3)}{(z^2x_3^2-1)(x_3^2+1)}\,dx_3.   (2.19)

Using (2.12) again on the first term of (2.19), we see
f_{Z_3}(z) = \frac{8}{\pi^3}\,\frac{\ln^2(z)}{z^2+1} + \frac{8}{\pi^3} \int_0^\infty \frac{x_3\ln(x_3)}{(z^2x_3^2-1)(x_3^2+1)}\,dx_3.   (2.20)

Now, by (1.16), we have
1 = \frac{8}{\pi^3}\,J_3 + \frac{8}{\pi^3} \int_0^\infty \int_0^\infty \frac{x_3\ln(x_3)}{(z^2x_3^2-1)(x_3^2+1)}\,dx_3\,dz.   (2.21)

Reversing the order of integration on the second term in (2.21), we encounter an integral of the form examined in (2.13), so this term vanishes. Thus, rearranging the terms in (2.21) gives
J_3 = \frac{\pi^3}{8}.

Applying (2.18), we find
S(3) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^3} = \frac{1}{2(2!)}\cdot\frac{\pi^3}{8} = \frac{\pi^3}{32}.
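The key value J_3 = \pi^3/8 can also be confirmed numerically (a sanity sketch of ours, not part of the proof), using the substitution z = \tan(\theta) to bring the integral onto a finite interval:

```python
import math

# Midpoint check that J_3, the integral of ln^2(z)/(z^2+1) over (0, inf),
# equals pi^3/8: substituting z = tan(theta) gives the integral of
# ln^2(tan(theta)) over (0, pi/2).
n = 200_000
h = (math.pi / 2) / n
j3 = sum(math.log(math.tan((i + 0.5) * h)) ** 2 for i in range(n)) * h
print(j3, math.pi**3 / 8)   # the two values agree closely
```

The squared logarithm is singular at both endpoints but integrable, which is why a moderately fine midpoint grid already lands within about 10^{-4} of \pi^3/8.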
Remark. Our result for J_3 agrees with (2.16), since |E_2| = 1.

We now find S(4) and \zeta(4).

Theorem 2.3.2. The values of S(4) and \zeta(4) are
S(4) = \sum_{n=0}^{\infty} \frac{1}{(2n+1)^4} = \frac{\pi^4}{96}, \qquad \zeta(4) = \sum_{n=1}^{\infty} \frac{1}{n^4} = \frac{\pi^4}{90}.

Proof. Let X_1,\ldots,X_4 be independent, nonnegative Cauchy random variables. Defining Z_4 as their quotient, Theorem 2.2.1 implies
f_{Z_4}(z) = \frac{16}{\pi^4} \int_0^\infty \int_0^\infty \int_0^\infty \frac{1}{z^2x_2^2x_3^2x_4^2+1} \prod_{i=2}^{4} \frac{x_i}{x_i^2+1}\,dx_2\,dx_3\,dx_4.
Applying the partial fractions identity from (2.12), we have
f_{Z_4}(z) = \frac{16}{\pi^4} \int_0^\infty \int_0^\infty \frac{x_3x_4\ln(zx_3x_4)}{(z^2x_3^2x_4^2-1)(x_3^2+1)(x_4^2+1)}\,dx_3\,dx_4,

which splits into three terms:
H_1 = \frac{16}{\pi^4} \int_0^\infty \int_0^\infty \frac{x_3x_4\ln(z)}{(z^2x_3^2x_4^2-1)(x_3^2+1)(x_4^2+1)}\,dx_3\,dx_4,   (2.22)

H_2 = \frac{16}{\pi^4} \int_0^\infty \int_0^\infty \frac{x_3x_4\ln(x_3)}{(z^2x_3^2x_4^2-1)(x_3^2+1)(x_4^2+1)}\,dx_3\,dx_4,   (2.23)

H_3 = \frac{16}{\pi^4} \int_0^\infty \int_0^\infty \frac{x_3x_4\ln(x_4)}{(z^2x_3^2x_4^2-1)(x_3^2+1)(x_4^2+1)}\,dx_3\,dx_4.   (2.24)
We first evaluate H_1. Using (2.12), we see
H_1 = \frac{16}{\pi^4} \int_0^\infty \frac{x_4\ln^2(z)}{(z^2x_4^2+1)(x_4^2+1)}\,dx_4 + \frac{16}{\pi^4} \int_0^\infty \frac{x_4\ln(z)\ln(x_4)}{(z^2x_4^2+1)(x_4^2+1)}\,dx_4.

Using (2.12) on the first term on the right hand side, we have
H_1 = \frac{16}{\pi^4}\,\frac{\ln^3(z)}{z^2-1} + \frac{16}{\pi^4} \int_0^\infty \frac{x_4\ln(z)\ln(x_4)}{(z^2x_4^2+1)(x_4^2+1)}\,dx_4.
Next, for H_2, we reverse the order of integration and use (2.12) to get
H_2 = \frac{16}{\pi^4} \int_0^\infty \frac{x_3\ln(x_3)}{(z^2x_3^2+1)(x_3^2+1)}\,dx_3.
Similarly, for H_3, we get
H_3 = \frac{16}{\pi^4} \int_0^\infty \frac{x_4\ln(x_4)}{(z^2x_4^2+1)(x_4^2+1)}\,dx_4.

Using (1.16), we have
1 = \frac{16}{\pi^4}\,J_4 + \frac{16}{\pi^4} \int_0^\infty \int_0^\infty \frac{x_4\ln(z)\ln(x_4)}{(z^2x_4^2+1)(x_4^2+1)}\,dx_4\,dz

+ \frac{16}{\pi^4} \int_0^\infty \int_0^\infty \frac{x_3\ln(x_3)}{(z^2x_3^2+1)(x_3^2+1)}\,dx_3\,dz

+ \frac{16}{\pi^4} \int_0^\infty \int_0^\infty \frac{x_4\ln(x_4)}{(z^2x_4^2+1)(x_4^2+1)}\,dx_4\,dz.   (2.25)

The third and fourth terms on the right hand side of (2.25) vanish, which is seen upon reversing the order of integration in each integral and observing that each inner integral is of the form in (2.13). To evaluate the second term, we make the substitution z = t/x_4, which transforms it into
\frac{16}{\pi^4} \int_0^\infty \int_0^\infty \frac{\ln(t)\ln(x_4)}{(t^2+1)(x_4^2+1)}\,dt\,dx_4 - \frac{16}{\pi^4} \int_0^\infty \int_0^\infty \frac{\ln^2(x_4)}{(t^2+1)(x_4^2+1)}\,dt\,dx_4.   (2.26)
The first term of (2.26) is of the same form seen in (2.13), and we recognize the second term as -8J_3/\pi^3 = -1. Thus, we see

1 = \frac{16}{\pi^4}\,J_4 - 1,

so rearranging terms gives

J_4 = \frac{\pi^4}{8}.

By (2.18), we have

S(4) = \sum_{n=0}^{\infty} \frac{1}{(2n+1)^4} = \frac{1}{2(3!)}\cdot\frac{\pi^4}{8} = \frac{\pi^4}{96}.

Finally, from (1.15), we have
\zeta(4) = \sum_{n=1}^{\infty} \frac{1}{n^4} = \frac{16}{15}\cdot\frac{\pi^4}{96} = \frac{\pi^4}{90}.
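Both claims of Theorem 2.3.2 can be double-checked by direct summation (a quick verification of ours, independent of the integral argument):

```python
import math

# Series check of Theorem 2.3.2: S(4) = pi^4/96, and relation (1.15)
# turns it into zeta(4) = (16/15) * S(4) = pi^4/90.
s4 = sum(1.0 / (2 * n + 1) ** 4 for n in range(5000))
zeta4 = 16.0 / 15.0 * s4
print(s4, math.pi**4 / 96)     # agree to high precision
print(zeta4, math.pi**4 / 90)  # agree to high precision
```

The truncation error of the fourth-power series after 5,000 terms is far below 10^{-9}, so the comparison is sharp.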
2.4 Further Generalization to S(k, a)

We recall the alternating series from (1.10), which is
S(k,a) = \sum_{n=-\infty}^{\infty} \frac{(-1)^{nk}}{(an+1)^k}, \quad a > 1,\ \frac{1}{a} \notin \mathbb{N},\ k \in \mathbb{N}.
We consider k independent, nonnegative random variables X_1,\ldots,X_k, each with density function

f_{X_i}(x_i) = \begin{cases} \dfrac{a}{\pi}\sin\!\left(\dfrac{\pi}{a}\right)\dfrac{1}{x_i^a+1} & i = 1, \\[1ex] \dfrac{2}{\pi}\sin\!\left(\dfrac{\pi}{a}\right)\dfrac{x_i^{\frac{2}{a}-1}}{x_i^2+1} & 2 \le i \le k, \end{cases}   (2.27)

where a satisfies the same conditions that we imposed on S(k,a).

Theorem 2.4.1. All of f_{X_1}(x_1),\ldots,f_{X_k}(x_k) are valid density functions.

Proof. The crux of the proof is the application of Euler's Reflection Formula:
\Gamma(t)\Gamma(1-t) = \pi\csc(\pi t), \quad t > 0,\ t \notin \mathbb{N},   (2.28)

where

\Gamma(t) = \int_0^\infty x^{t-1}e^{-x}\,dx.
For i = 1, we see
\int_0^\infty f_{X_1}(x_1)\,dx_1 = \frac{a}{\pi}\sin\!\left(\frac{\pi}{a}\right) \int_0^\infty \frac{1}{x_1^a+1}\,dx_1

= \frac{a}{\pi}\sin\!\left(\frac{\pi}{a}\right) \int_0^\infty \int_0^\infty e^{-y(1+x_1^a)}\,dy\,dx_1   (2.29)

= \frac{a}{\pi}\sin\!\left(\frac{\pi}{a}\right) \int_0^\infty \int_0^\infty \frac{1}{a}\,t^{\frac{1}{a}-1}e^{-t}\,y^{-\frac{1}{a}}e^{-y}\,dt\,dy   (2.30)

= \frac{a}{\pi}\sin\!\left(\frac{\pi}{a}\right) \frac{\Gamma\!\left(\frac{1}{a}\right)\Gamma\!\left(1-\frac{1}{a}\right)}{a} = 1,
in which we made the substitution x_1 = (t/y)^{1/a} on (2.29) to obtain (2.30). For i > 1, we have
\int_0^\infty f_{X_i}(x_i)\,dx_i = \frac{2}{\pi}\sin\!\left(\frac{\pi}{a}\right) \int_0^\infty \frac{x_i^{\frac{2}{a}-1}}{x_i^2+1}\,dx_i

= \frac{2}{\pi}\sin\!\left(\frac{\pi}{a}\right) \int_0^\infty \int_0^\infty x_i^{\frac{2}{a}-1}\,e^{-y(1+x_i^2)}\,dy\,dx_i   (2.31)

= \frac{2}{\pi}\sin\!\left(\frac{\pi}{a}\right) \frac{\Gamma\!\left(\frac{1}{a}\right)\Gamma\!\left(1-\frac{1}{a}\right)}{2}   (2.32)

= 1,

in which (2.32) follows from the substitution x_i = \sqrt{t/y} on (2.31).
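Theorem 2.4.1 can also be checked numerically for a concrete parameter value (a sanity sketch of ours; the value a = 3 is an arbitrary choice satisfying the stated conditions):

```python
import math

# Numerical check of Theorem 2.4.1 for a = 3, i = 1: the density
# (a/pi) * sin(pi/a) / (x^a + 1) should integrate to 1 over (0, inf).
a, n = 3.0, 200_000
h = 1.0 / n
total = 0.0
for i in range(n):
    t = (i + 0.5) * h
    x = t / (1.0 - t)                  # map (0,1) onto (0,inf)
    total += 1.0 / (x**a + 1.0) / (1.0 - t) ** 2
mass = (a / math.pi) * math.sin(math.pi / a) * total * h
print(mass)   # close to 1.0
```

The transformed integrand decays like (1-t) at the right endpoint, so the midpoint rule gives the total mass to several digits.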
Remark. When a = 2, we see X_1,\ldots,X_k are Cauchy.

Define the random variable, for k \ge 2,

Z_{k,a} = X_1/(X_2\ldots X_k)^{\frac{2}{a}}.
We seek the density function f_{Z_{k,a}}(z).

Theorem 2.4.2. The density function f_{Z_{k,a}}(z) is
f_{Z_{k,a}}(z) = \frac{a\,2^k\sin^k\!\left(\frac{\pi}{a}\right)}{2\pi^k} \int_0^\infty \cdots \int_0^\infty \frac{(x_2\ldots x_k)^{\frac{4}{a}-1}}{(z^ax_2^2\ldots x_k^2+1)(x_2^2+1)\ldots(x_k^2+1)}\,dx_2\ldots dx_k.   (2.33)

Proof. We start with the cumulative distribution function
F_{Z_{k,a}}(z) = Pr(Z_{k,a} \le z).   (2.34)

Expanding the right hand side of (2.34), we find
Pr(Z_{k,a} \le z) = Pr\!\left(X_1 \le z(X_2\ldots X_k)^{\frac{2}{a}}\right),

which, upon integrating F_{X_1}\!\left(z(x_2\ldots x_k)^{\frac{2}{a}}\right) against the joint density of X_2,\ldots,X_k, becomes
\frac{a\,2^k\sin^k\!\left(\frac{\pi}{a}\right)}{2\pi^k} \int_0^\infty \cdots \int_0^\infty \int_0^{z(x_2\ldots x_k)^{2/a}} \frac{(x_2\ldots x_k)^{\frac{2}{a}-1}}{(x_1^a+1)(x_2^2+1)\ldots(x_k^2+1)}\,dx_1\,dx_2\ldots dx_k.
Thus, by (1.19), we have
f_{Z_{k,a}}(z) = \frac{a\,2^k\sin^k\!\left(\frac{\pi}{a}\right)}{2\pi^k} \int_0^\infty \cdots \int_0^\infty \frac{(x_2\ldots x_k)^{\frac{4}{a}-1}}{(z^ax_2^2\ldots x_k^2+1)(x_2^2+1)\ldots(x_k^2+1)}\,dx_2\ldots dx_k.

Now, we consider

\int_0^\infty f_{Z_{k,a}}(z)\,dz,   (2.35)

whose integrand is nonnegative. Hence, Tonelli's Theorem allows us to reverse the order of integration in (2.35). We integrate with respect to z first and then with respect to the other k-1 variables. Making the substitution z = t(x_2\ldots x_k)^{-2/a}, and then applying Theorem 2.4.1 several times, we see (2.35) is equal to 1, satisfying (1.16).

Remark. When a = 2, we recover f_{Z_k}(z) from (2.9).

In simplifying the density function when k > 2, we encounter integrals of the form
\int_0^\infty \frac{t^{m-1}}{t^n-1}\,dt = -\frac{\pi}{n}\cot\!\left(\frac{m\pi}{n}\right), \quad m < n,   (2.36)

as listed in Gradshteyn and Ryzhik entry [9, Section 3.241: 4]. At the end, we arrive at the logarithmic integral

J_{k,a} = \int_0^\infty \frac{\ln^{k-1}(z)}{z^a-(-1)^k}\,dz,   (2.37)

which generalizes J_k from (2.14). We now highlight the relation between J_{k,a} and S(k,a).

Theorem 2.4.3. The following relation holds:

S(k,a) = \frac{J_{k,a}}{(k-1)!}.   (2.38)

Proof. We write
J_{k,a} = \int_0^1 \frac{\ln^{k-1}(z)}{z^a-(-1)^k}\,dz + \int_1^\infty \frac{\ln^{k-1}(z)}{z^a-(-1)^k}\,dz   (2.39)

= \int_0^1 \frac{\ln^{k-1}(z)}{z^a-(-1)^k}\,dz + \int_0^1 \frac{(-1)^{k-1}\ln^{k-1}(u)}{u^{-a}-(-1)^k}\,\frac{1}{u^2}\,du   (2.40)

= (k-1)!\sum_{n=0}^{\infty} \frac{(-1)^{nk}}{(an+1)^k} + (k-1)!\sum_{n=1}^{\infty} \frac{(-1)^{nk}}{(1-an)^k}   (2.41)

= (k-1)!\sum_{n=0}^{\infty} \frac{(-1)^{nk}}{(an+1)^k} + (k-1)!\sum_{n=-\infty}^{-1} \frac{(-1)^{nk}}{(an+1)^k}

= (k-1)!\,S(k,a),

in which (2.40) is the result of the substitution z = 1/u performed on the second term in (2.39), and (2.41) is the result of converting each integrand in (2.40) into a geometric series, exchanging summation and integration, and finally integrating by parts.
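Theorem 2.4.3 can be spot-checked numerically for a concrete case (our own verification sketch; k = 2, a = 3 is an arbitrary admissible choice). Here J_{2,3} is the integral of \ln(z)/(z^3-1) over (0,\infty), whose integrand extends continuously across z = 1, and the right side is the bilateral sum of 1/(3n+1)^2:

```python
import math

# Check of (2.38) for k = 2, a = 3: J_{2,3}, the integral of
# ln(z)/(z^3 - 1) over (0, inf), should equal 1! * S(2,3), the
# bilateral sum of 1/(3n+1)^2 over all integers n.
n = 400_000
h = 1.0 / n
j = 0.0
for i in range(n):
    t = (i + 0.5) * h
    z = t / (1.0 - t)                  # map (0,1) onto (0,inf)
    j += math.log(z) / (z**3 - 1.0) / (1.0 - t) ** 2
j *= h
s = sum(1.0 / (3 * m + 1) ** 2 for m in range(-100_000, 100_001))
print(j, s)   # both close to 4*pi^2/27 ~ 1.4622
```

Both numbers agree with the closed form \pi^2/9 \cdot \csc^2(\pi/3) = 4\pi^2/27 given by Theorem 2.5.1 below... (the identity appears in the next subsection; here it simply serves as the reference value).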
2.5 Evaluating S(2, a) and S(3, a)

With this machinery, we seek the closed forms of S(2,a) and S(3,a), which have manageable calculations. The former sum is a standard complex variables result. Specific examples of S(2,a) for different a are provided in [5].

Theorem 2.5.1. S(2,a) is given by the formula
∞ 1 π2 π S(2,a)= = csc2 . (2.42) (an + 1)2 a2 a n= X−∞
Proof. Let X_1 and X_2 be the independent, nonnegative random variables defined in (2.27), and let Z_{2,a} = X_1/X_2^{2/a}. Then,
f_{Z_{2,a}}(z) = \frac{2a\sin^2\!\left(\frac{\pi}{a}\right)}{\pi^2} \int_0^\infty \frac{x_2^{\frac{4}{a}-1}}{(z^ax_2^2+1)(x_2^2+1)}\,dx_2,

from Theorem 2.4.2. Observing z^a = (z^{a/2})^2, we can use (2.12) to see that