Infinitely divisible laws associated with hyperbolic functions

Jim Pitman* and Marc Yor**

Technical Report No. 581

Department of Statistics
University of California
367 Evans Hall # 3860
Berkeley, CA 94720-3860

October 2000. Revised February 21, 2001

* Dept. Statistics, U.C. Berkeley. Both authors supported in part by N.S.F. Grant DMS-0071448.
** Laboratoire de Probabilités, Université Pierre et Marie Curie, 4 Place Jussieu, F-75252 Paris Cedex 05, France.

Abstract

The infinitely divisible distributions on R⁺ of random variables C_t, S_t and T_t with Laplace transforms

    (1/cosh √(2λ))^t,   (√(2λ)/sinh √(2λ))^t,   and   (tanh √(2λ)/√(2λ))^t

respectively are characterized for various t > 0 in a number of different ways: by simple relations between their moments and cumulants, by corresponding relations between the distributions and their Lévy measures, by recursions for their Mellin transforms, and by differential equations satisfied by their Laplace transforms. Some of these results are interpreted probabilistically via known appearances of these distributions for t = 1 or 2 in the description of the laws of various functionals of Brownian motion and Bessel processes, such as the heights and lengths of excursions of a one-dimensional Brownian motion. The distributions of C_1 and S_2 are also known to appear in the Mellin representations of two important functions in analytic number theory, the Riemann zeta function and the Dirichlet L-function associated with the quadratic character modulo 4. Related families of infinitely divisible laws, including the gamma, logistic and generalized hyperbolic secant distributions, are derived from S_t and C_t by operations such as Brownian subordination, exponential tilting, and weak limits, and characterized in various ways.

Keywords: Riemann zeta function, Mellin transform, characterization of distributions, Brownian motion, Bessel process, Lévy process, gamma process, Meixner process

AMS subject classifications. 11M06, 60J65, 60E07

Contents

1 Introduction 3

2 Preliminaries 7

3 Some special recurrences 10
  3.1 Uniqueness in Theorem 1 ...... 14
  3.2 Some special moments ...... 16
  3.3 Variants of Theorem 1 ...... 18

4 Connections with the gamma process 21

5 Lévy measures 22

6 Characterizations 25
  6.1 Self-generating Lévy processes ...... 28
  6.2 The law of C_2 ...... 31
  6.3 The law of S_2 ...... 32
  6.4 The laws of T_1 and T_2 ...... 35
  6.5 The laws of Ŝ_1 and Ŝ_2 ...... 36

7 Brownian Interpretations 36
  7.1 Brownian local times and squares of Bessel processes ...... 37
  7.2 Brownian excursions ...... 38
  7.3 Poisson interpretations ...... 40
  7.4 A path transformation ...... 43

1 Introduction

This paper is concerned with the infinitely divisible distributions generated by some particular processes with stationary independent increments (Lévy processes) [6, 56] associated with the hyperbolic functions cosh, sinh and tanh. In particular, we are interested in the laws of the processes C, Ĉ, S, Ŝ, T and T̂ characterized by the following formulae: for t ≥ 0 and λ ∈ ℝ

    E[exp(iλ Ĉ_t)] = E[exp(−(λ²/2) C_t)] = (1/cosh λ)^t    (1)

    E[exp(iλ Ŝ_t)] = E[exp(−(λ²/2) S_t)] = (λ/sinh λ)^t    (2)

    E[exp(iλ T̂_t)] = E[exp(−(λ²/2) T_t)] = (tanh λ/λ)^t    (3)

according to the mnemonic: C for cosh, S for sinh and T for tanh. These formulae show how the processes Ĉ, Ŝ and T̂ can be constructed from C, S and T by Brownian subordination: for X = C, S or T

    X̂_t = β(X_t)    (4)

where β := (β_u, u ≥ 0) is a standard Brownian motion, that is the Lévy process such that β_u has Gaussian distribution with E(β_u) = 0 and E(β_u²) = u, and β is assumed independent of the increasing Lévy process (subordinator) X. Both Ĉ and Ŝ belong to the class of generalized z-processes [30], whose definition is recalled in Section 4. The distributions of Ĉ_1, Ŝ_1 and T̂_1 arise in connection with Lévy's stochastic area formula [40] and in the study of the Hilbert transform of the local time of a symmetric Lévy process [9, 25]. As we discuss in Section 6.5, the distributions of Ŝ_1 and Ŝ_2 arise also in a completely different context, which is the work of Aldous [2] on asymptotics of the random assignment problem.

The laws of C_t and S_t arise naturally in many contexts, especially in the study of Brownian motion and Bessel processes [70, §18.6]. For instance, the distribution of C_1 is that of the hitting time of 1 by the one-dimensional Brownian motion β. The distribution of S_1 is that of the hitting time of the unit sphere by a Brownian motion in ℝ³ started at the origin [17], while (π/2)√S_2 has the same distribution as the maximum of a standard Brownian excursion [15, 9]. This distribution also appears as an asymptotic distribution in the study of conditioned random walks and random trees [60, 1]. The distributions of C_t and S_t for t = 1, 2 are also of significance in analytic number theory, due to the Mellin representations of the entire function

    ξ(s) := (1/2) s(s−1) π^{−s/2} Γ(s/2) ζ(s)   where   ζ(s) := Σ_{n=1}^∞ n^{−s}    (5)

is Riemann's zeta function, and the entire function

    Λ_4(s) := (4/π)^{(s+1)/2} Γ((s+1)/2) L_4(s)   where   L_4(s) := Σ_{n=0}^∞ (−1)^n/(2n+1)^s    (6)

is the Dirichlet series associated with the quadratic character modulo 4. The functions 2ξ(2s) and Λ_4(2s+1) appear as the Mellin transforms of (π/2)S_2 and (π/2)C_1 respectively, and the Mellin transforms of S_1, C_2, T_1 and T_2 are also simply related to ξ and Λ_4. These results are presented in Table 1, where λ ∈ ℝ, s ∈ ℂ (with ℜs > −t/2 for T_t), and n = 1, 2, .... The discussion around (8) and (9) below recalls the classical definitions of the numbers A_m and B_{2n} appearing in Table 1.

Table 1. The Mellin transforms of C_1, C_2, S_1, S_2, T_1 and T_2.

X     E(e^{−λ²X/2})     E[(πX/2)^s]                                  ((2n)!/(2^n n!)) E(X^n) = (−1)^n E(X̂^{2n})

C_1   1/cosh λ          Λ_4(2s+1)                                    A_{2n}
C_2   1/cosh²λ          2(4^{s+1} − 1) ξ(2s+2) / (π(s+1))            A_{2n+1}
S_1   λ/sinh λ          (2^{2−2s} − 2) ξ(2s) / (1 − 2s)              2(2^{2n−1} − 1)|B_{2n}|
S_2   (λ/sinh λ)²       2 ξ(2s)                                      (2n−1) 2^{2n} |B_{2n}|
T_1   (tanh λ)/λ        2(4^{s+1} − 1) ξ(2s+2) / (π(2s+1)(s+1))      A_{2n+1}/(2n+1)
T_2   ((tanh λ)/λ)²     2(4^{s+2} − 1) ξ(2s+4) / (π²(s+1)(s+2))      A_{2n+3}/((2n+1)(2n+2))

See [8] for a recent review of these and other properties of the laws of C_t and S_t, with emphasis on the special cases when t = 1 or 2. The formulae in the table for T_t are derived in Section 6.4 of this paper. As shown in [9] and [8, §3.3], the classical functional equations ξ(s) = ξ(1−s) and Λ_4(s) = Λ_4(1−s) translate into symmetry properties of the laws of S_2 and C_1, and there is a reciprocal relation between the laws of C_2 and S_1.

The formulae for positive integer moments in the table are read by comparison of the expansions

    E[e^{iλX̂}] = Σ_{n=0}^∞ (−1)^n E(X^n) ((1/2)λ²)^n/n! = Σ_{n=0}^∞ (−1)^n E(X̂^{2n}) λ^{2n}/(2n)!   (|λ| < ε)    (7)

for some ε > 0, with classical expansions of the hyperbolic functions (see [28, p. 35], or (83) and (96) below). The formulae for moments of C_t and T_t for t = 1 or 2 involve the numbers A_m of alternating permutations of {1, 2, ..., m}, that is permutations (a_1, ..., a_m) with a_1 > a_2 < a_3 > a_4 < ... . The A_{2n} are called Euler or secant numbers, and the A_{2n+1} are called tangent numbers, due to the expansions [19, p. 259]

    1/cos λ = Σ_{n=0}^∞ A_{2n} λ^{2n}/(2n)!,   tan λ = Σ_{n=0}^∞ A_{2n+1} λ^{2n+1}/(2n+1)!.    (8)

The formulae for moments of S_1 and S_2 involve the rational Bernoulli numbers B_{2n}, due to the expansion

    λ coth λ − 1 = Σ_{n=1}^∞ B_{2n} (2λ)^{2n}/(2n)!    (9)

and the elementary identity 1/sinh λ = coth(λ/2) − coth λ. Because tan λ = cot λ − 2 cot 2λ, the tangent and Bernoulli numbers are related by

    A_{2n+1} = (−1)^n (2^{2n+1}(2^{2n+2} − 1)/(n+1)) B_{2n+2}.    (10)

The first few A_m and B_{2n} are shown in Table 2. See [21] for a bibliography of the Bernoulli numbers and their applications.

Table 2. Secant, tangent and Bernoulli numbers.

n          1     2      3      4       5        6
A_{2n}     1     5      61     1385    50521    2702765
A_{2n+1}   2     16     272    7936    353792   22368256
B_{2n}     1/6   −1/30  1/42   −1/30   5/66     −691/2730
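Both rows of Table 2 are easy to regenerate exactly: the A_m satisfy the classical Seidel (boustrophedon) recurrence, and the B_n follow from the defining recurrence Σ_{k=0}^{n−1} C(n+1,k)B_k = −(n+1)B_n. A minimal sketch in exact arithmetic (the function names are ours, not from the paper):

```python
from fractions import Fraction
from math import comb

def zigzag_numbers(m_max):
    """A_0, ..., A_{m_max}: numbers of alternating permutations,
    computed by the Seidel/boustrophedon recurrence."""
    A, row = [1], [1]
    for _ in range(m_max):
        new = [0]
        for v in reversed(row):      # boustrophedon partial sums
            new.append(new[-1] + v)
        row = new
        A.append(row[-1])
    return A

def bernoulli_numbers(n_max):
    """B_0, ..., B_{n_max} as exact fractions (B_1 = -1/2 convention)."""
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

A = zigzag_numbers(13)
B = bernoulli_numbers(14)
```

The same data lets one confirm the relation (10) between tangent and Bernoulli numbers for as many n as desired.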

Implicit in Table 1 is Euler's famous evaluation [65] of ζ(2n), and so is the companion evaluation of L_4(2n−1) given in [22, 1.14(14)]: for n = 1, 2, ...

    ζ(2n) = (2π)^{2n} |B_{2n}| / (2 (2n)!)   and   L_4(2n−1) = π^{2n−1} A_{2n−2} / (2^{2n} (2n−2)!).    (11)
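Both evaluations in (11) are easily confirmed numerically against partial sums of the defining series; a quick sketch, with the small tables of |B_{2n}| and A_{2n−2} values hard-coded from Table 2 (function names are ours):

```python
import math

def zeta(s, terms=100000):
    # partial sum of the Dirichlet series, valid for s > 1
    return sum(n ** -s for n in range(1, terms + 1))

def L4(s, terms=100000):
    # alternating series for the L-function mod 4
    return sum((-1) ** n * (2 * n + 1) ** -s for n in range(terms + 1))

def zeta_via_bernoulli(n):
    # (11): zeta(2n) = (2*pi)^(2n) |B_{2n}| / (2 (2n)!)
    absB = {1: 1 / 6, 2: 1 / 30, 3: 1 / 42}          # |B_2|, |B_4|, |B_6|
    return (2 * math.pi) ** (2 * n) * absB[n] / (2 * math.factorial(2 * n))

def L4_via_secant(n):
    # (11): L4(2n-1) = pi^(2n-1) A_{2n-2} / (2^(2n) (2n-2)!)
    A = {0: 1, 2: 1, 4: 5}                           # A_0, A_2, A_4
    return math.pi ** (2 * n - 1) * A[2 * n - 2] / (2 ** (2 * n) * math.factorial(2 * n - 2))
```

For n = 1 the two formulae reduce to ζ(2) = π²/6 and Leibniz's L_4(1) = π/4.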

Our interest in these results led us to investigate the Mellin transforms of S_t, C_t and T_t for arbitrary t > 0, and to provide some characterizations of the various infinitely divisible laws involved. These characterizations are related to various special features of these laws: special recurrences satisfied by their moments and Mellin transforms, simple relations between their moments and cumulants, corresponding relations between the laws and their Lévy measures, and differential equations satisfied by their Laplace or Fourier transforms. These analytic results are related to various representations of the laws in terms of Brownian motion and Bessel processes, in particular the heights and lengths of excursions of a one-dimensional Brownian motion.

The rest of this paper is organized as follows. Section 2 recalls some background for the discussion of Lévy processes. Section 3 presents some special recurrences satisfied by the moments and Mellin transforms of the laws of S_t and C_t. We also show in this section how various processes involved can be characterized by such moment recurrences. In Section 4, we briefly review some properties of the gamma process, and the construction of both S and C as weighted sums of independent gamma processes. Section 6 presents a number of characterizations of the infinitely divisible laws under study. Some of these characterizations were presented without proof in [8, Proposition 2]. Section 7 presents several constructions of functionals X of a Brownian motion or Bessel process such that X has the distribution of either S_t or C_t for some t > 0. There is some overlap between that section and Section 4 of [8]. There we reviewed the large number of different Brownian and Bessel functionals whose laws are related to S_t and C_t. Here we focus attention on constructions where the structure of the underlying stochastic process brings out interesting properties of the distributions of S_2 and C_2, in particular several of those properties involved in the characterizations of Section 6.

2 Preliminaries

For a Lévy process (X_t), with E(X_t²) < ∞ for some, and hence all, t > 0, the characteristic function of X_t admits the well-known Kolmogorov representation [10, §28]

    E[e^{iλX_t}] = exp[t ψ(λ)]   with   ψ(λ) = iλc + ∫ (e^{iλx} − 1 − iλx) x^{−2} K(dx).    (12)

Here c ∈ ℝ, the integrand is interpreted as −λ²/2 at x = 0, and K = K_X is the finite Kolmogorov measure associated with (X_t),

    K_X(dx) = σ² δ_0(dx) + x² ν_X(dx)    (13)

with σ² the parameter of the Brownian component of (X_t), with δ_0 a unit mass at 0, and with ν_X the usual Lévy measure of (X_t). Assuming that the exponent ψ(λ) in (12) can be expanded as

    ψ(λ) = Σ_{m=1}^∞ κ_m (iλ)^m/m!   (|λ| < ε)    (14)

for some ε > 0, it follows from (12) that

    κ_1 = c,   κ_2 = σ² + ∫ x² ν_X(dx),   κ_n = ∫ x^n ν_X(dx) for n ≥ 3.    (15)

According to the definition of cumulants of a probability distribution, recalled later in (91), the nth cumulant of X_t is κ_n t. In particular, if c = ∫ x ν_X(dx) and σ² = 0, as when X = S, C, Ŝ or Ĉ, the cumulants κ_n of X_1 are just the moments of the Lévy measure ν_X.

X

For a L evy pro cess X  with all moments nite, it is a well known consequence of the

t

n

Kolmogorov representation 12 that the sequence of functions t ! E X  is a sequence

t

n

of polynomials of binomial type [20]. That is to say, E X  is a p olynomial in t of degree

t

at most n,and

n

X

n

n k nk

E X = E X E X : 16

t+u t u

k

k =0

The co ecients of these moment polynomials are determined combinatorially by the

 via the consequence of 12 and 14 that for 0  k  n

n

 !

k

1

X



m

k n n m

k ![t ] E X =n![ ] 17

t

m!

m=1

p p

where [y ]f y  denotes the co ecientof y in the expansion of f y inpowers of y .

0

Equivalently, starting from E X  = 1, these p olynomials are determined by the follow-

t

ing recursion due to Thiele [31, p. 144, 4.2] see also [44, p.74, Th. 2], [20, Th. 2.3.6]:

for n =1; 2;:::

n1

X

n 1

i n

E X  : 18 E X =t

ni

t t

i

i=0

Table 3 displays the first few moment polynomials for five of the Lévy processes considered in this paper: the standard gamma process (Γ_t) defined by

    P(Γ_t ∈ dx) = Γ(t)^{−1} x^{t−1} e^{−x} dx   (t > 0, x > 0),    (19)

Brownian motion β, and the processes C, S and T.

Table 3. Some moment polynomials

X    E(X_t)   E(X_t²)         E(X_t³)                  E(X_t⁴)
Γ    t        t(t+1)          t(t+1)(t+2)              t(t+1)(t+2)(t+3)
β    0        t               0                        3t²
C    t        t(2+3t)/3       t(16+30t+15t²)/15        t(272+588t+420t²+105t³)/105
S    t/3      t(2+5t)/45      t(16+42t+35t²)/945       t(144+404t+420t²+175t³)/14175
T    2t/3     4t(7+5t)/45     8t(124+147t+35t²)/945    16t(2286+3509t+1470t²+175t³)/14175
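The S row of Table 3 can be reproduced exactly from Thiele's recursion (18): the cumulants κ_n of the subordinator S are read off from the power series of −log(sin λ/λ) (as κ_n = n! 2^n times the coefficient of λ^{2n}), and (18) then builds the moment polynomials. A sketch in exact rational arithmetic (the series helpers are ours):

```python
from fractions import Fraction
from math import comb, factorial

N = 4

# series of sin(x)/x in the variable u = x^2: sum_k (-1)^k u^k/(2k+1)!
a = [Fraction((-1) ** k, factorial(2 * k + 1)) for k in range(N + 1)]

# b = log of that series (a[0] = 1), via  k*a_k = sum_j j*b_j*a_{k-j}
b = [Fraction(0)] * (N + 1)
for k in range(1, N + 1):
    b[k] = (k * a[k] - sum(j * b[j] * a[k - j] for j in range(1, k))) / k

# cumulants of S: kappa_n = n! 2^n [x^{2n}] (-log(sin x / x))
kappa = [None] + [-factorial(n) * 2 ** n * b[n] for n in range(1, N + 1)]

# Thiele's recursion (18), with E(S_t^n) kept as a coefficient list in t
polys = [[Fraction(1)]]
for n in range(1, N + 1):
    acc = [Fraction(0)] * (n + 1)
    for i in range(n):
        c = comb(n - 1, i) * kappa[n - i]
        for j, coef in enumerate(polys[i]):
            acc[j + 1] += c * coef        # the leading factor t shifts powers
    polys.append(acc)
```

The resulting coefficient lists match the S row of Table 3 term by term.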

The moment polynomials of Γ and β illustrate some basic formulae, which we recall here for ease of later reference. First of all,

    E(Γ_t^s) = Γ(t+s)/Γ(t)   (s > −t),    (20)

which reduces to t(t+1)···(t+n−1) for s = n a positive integer. The identity in distribution β_t² = 2tΓ_{1/2} and (20) give

    E|β_t|^{2s} = (2t)^s Γ(1/2 + s)/Γ(1/2) = 2^{−s} t^s Γ(2s+1)/Γ(s+1),    (21)

where the second equality is the gamma duplication formula. In particular,

    E(β_1^{2n}) = (2n)!/(2^n n!)   (n = 0, 1, 2, ...).    (22)

For X = C, S or T we do not know of any explicit formula or combinatorial interpretation for the nth moment polynomial for general n. Table 5 in Section 4 shows that in these cases the cumulants κ_n, which appear in the descriptions (17) and (18) of E(X_t^n), turn out to involve the Bernoulli numbers B_{2n}, which are themselves recursively defined. The recursive description of the moment polynomials via (17) or (18) is consequently rather cumbersome. It is therefore remarkable that for X = C, S and T there are simple recurrences for the moments and Mellin transforms of X_t which make no reference to the Bernoulli numbers. These recurrences are the subject of the next section.

3 Some special recurrences

Theorem 1 (i) The process C is the unique Lévy process satisfying the following moment recurrence for t ≥ 0 and s = 1, 2, ...:

    (t² + t) E[C_{t+2}^s] = t² E[C_t^s] + (2s+1) E[C_t^{s+1}].    (23)

Moreover, the recurrence continues to hold for all t ≥ 0 and s ∈ ℂ, with E(C_t^s) an entire function of s for each t.

(ii) The process S is the unique Lévy process satisfying the following moment recurrence for t ≥ 0 and s = 1, 2, ...:

    (t² + t) E[S_{t+2}^s] = (t − 2s)(t − 2s + 1) E[S_t^s] + 2s t² E[S_t^{s−1}].    (24)

Moreover, the recurrence continues to hold for all t ≥ 0 and s ∈ ℂ, with E(S_t^s) an entire function of s for each t.

(iii) The process T is the unique Lévy process satisfying the following moment recurrence for t ≥ 1 and s = 1, 2, ...:

    (2s + t) E[T_t^s] = t E[T_{t−1}^s] + 2st E[T_{t+1}^{s−1}].    (25)

Moreover, the recurrence continues to hold for all t ≥ 1 and s ∈ ℂ with ℜs > (1−t)/2.

Here and in similar assertions below, "unique" means of course unique in law. The fact that E(C_t^s) and E(S_t^s) are entire functions of s for each t is easily seen. The random variables C_t and S_t have all positive moments finite, because they have moment generating functions which converge in a neighbourhood of 0, and they have all negative moments finite by application to X = C_t or X = S_t of the following general formula: for X a non-negative random variable with φ_X(λ) := E(e^{−λX}),

    E[X^{−p}] = (2^{1−p}/Γ(p)) ∫_0^∞ φ_X(λ²/2) λ^{2p−1} dλ   (p > 0).    (26)
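Formula (26) is easy to test numerically. A minimal sketch with X standard exponential, for which φ_X(λ) = 1/(1+λ) and E[X^{−p}] = Γ(1−p); the substitution λ = u/(1−u) and midpoint quadrature are our own choices, not from the paper:

```python
import math

def neg_moment(phi, p, steps=100000):
    # (26): E[X^{-p}] = 2^{1-p}/Gamma(p) * Int_0^oo phi(l^2/2) l^{2p-1} dl,
    # computed by mapping (0,oo) to (0,1) via l = u/(1-u), midpoint rule
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        l = u / (1.0 - u)
        total += phi(l * l / 2.0) * l ** (2 * p - 1) / (1.0 - u) ** 2
    return 2.0 ** (1.0 - p) / math.gamma(p) * total * h

# X ~ Exp(1): phi(t) = 1/(1+t), and E[X^{-1/2}] = Gamma(1/2) = sqrt(pi)
estimate = neg_moment(lambda t: 1.0 / (1.0 + t), 0.5)
```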

This holds for a positive constant X by definition of Γ(p), hence for every positive random variable X by Fubini's theorem. See also [69] and Yor [70, Exercise 11.1] for other applications of (26) to S_t. Another application of (26) shows that E(T_t^s) < ∞ iff s > −t/2. For X̂_t = β(X_t) as in (4), where X may be C, S or T, Brownian scaling gives X̂_t = √(X_t) β_1 in distribution, hence

    E|X̂_t|^{2s} = E|β_1|^{2s} E(X_t^s)   (ℜs > −1/2).    (27)

Using (21) and Γ(x+1) = xΓ(x) we see that

    E[|β_1|^{2(s+1)}] = (2s+1) E[|β_1|^{2s}]   (ℜs > −1/2).    (28)

The recurrences in Theorem 1 are equivalent via (27) and (28) to the following recurrences for Ĉ, Ŝ and T̂: for all t > 0 and ℜs > −1/2

    (t² + t) E[|Ĉ_{t+2}|^{2s}] = t² E[|Ĉ_t|^{2s}] + E[|Ĉ_t|^{2s+2}],    (29)

    (t² + t) E[|Ŝ_{t+2}|^{2s}] = (t − 2s)(t − 2s + 1) E[|Ŝ_t|^{2s}] + 2s(2s−1) t² E[|Ŝ_t|^{2s−2}],    (30)

and for all t ≥ 1 and s with ℜ(2s) > −1 and ℜ(2s) > 1 − t

    (2s + t) E[|T̂_t|^{2s}] = t E[|T̂_{t−1}|^{2s}] + 2s(2s−1) t E[|T̂_{t+1}|^{2s−2}].    (31)

The following lemma presents some recurrences for probability density functions. The formulae (29) and (30) for s > 0 are obtained by multiplying both sides of (33) and (35) by |x|^{2s} and integrating, using integration by parts, which presents no difficulty since the functions φ_t(x) and σ_t(x) are Fourier transforms of functions in the Schwartz space, hence also members of the Schwartz space. This establishes the recurrences of the theorem for all real s > 0, hence all s ∈ ℂ by analytic continuation. A similar argument allows (31) to be derived from (37). More care is required to justify the integrations by parts, but this can be done using the fact that E(|T̂_t|^{2s}) < ∞ for all s > 0, which is used in the proof for large s. Theorem 1 follows, apart from the uniqueness claims, which we establish in Section 3.1.

Lemma 2 The density

    φ_t(x) := P(Ĉ_t ∈ dx)/dx = (1/2π) ∫_{−∞}^∞ e^{−iyx} (1/cosh y)^t dy    (32)

satisfies the recurrence

    t(t+1) φ_{t+2}(x) = (t² + x²) φ_t(x)    (33)

while

    σ_t(x) := P(Ŝ_t ∈ dx)/dx = (1/2π) ∫_{−∞}^∞ e^{−iyx} (y/sinh y)^t dy    (34)

satisfies the recurrence

    t(t+1) σ_{t+2}(x) = (x² + t²) σ_t″(x) + (2t+4) x σ_t′(x) + (1+t)(2+t) σ_t(x)    (35)

and

    τ_t(x) := P(T̂_t ∈ dx)/dx = (1/2π) ∫_{−∞}^∞ e^{−iyx} (tanh y/y)^t dy    (36)

satisfies the recurrence

    −x τ_t′(x) + (t−1) τ_t(x) = t τ_{t−1}(x) + t τ_{t+1}″(x)    (37)

for x ≠ 0 and t ≥ 1.

Remarks. Formula (33) for t = 1, 2, ... was given by Morris [44, (5.3)]. As noted there, this recursion and known formulae for φ_t(x) for t = 1 or 2 (displayed in Table 6) show that φ_t(x) is a polynomial of degree t − 1 divided by cosh(πx/2) if t is an odd integer, and divided by sinh(πx/2) if t is even. It is known that the classical integral representations of the beta function

    ∫_0^1 u^{p−1} (1−u)^{q−1} du =: B(p, q) = Γ(p)Γ(q)/Γ(p+q) = ∫_{−∞}^∞ c e^{−qcy} dy/(1 + e^{−cy})^{p+q}    (38)

where c > 0, ℜp > 0, ℜq > 0, yield the formula [23, 1.9(5)], [32]

    φ_t(x) = (2^{t−2}/(π Γ(t))) Γ((t+ix)/2) Γ((t−ix)/2) = (2^{t−2}/π) B((t+ix)/2, (t−ix)/2).    (39)

As shown in [9] and [8] the Laplace transforms of C_t and S_t can be inverted to give series formulae for the corresponding densities for general t > 0.

Proof. The recurrence (33) follows from (39) using Γ(x+1) = xΓ(x). In the case of σ_t we do not know of any explicit formula like (39) for general t > 0. So we proceed by the following method, which can also be used to derive the recurrence for φ_t without appeal to (39). By differentiating (34) with respect to x, then integrating by parts, we obtain

    x σ_t′(x) + (t+1) σ_t(x) = (t/2π) ∫_{−∞}^∞ e^{−iyx} (y/sinh y)^{t+1} cosh y dy.    (40)

Differentiating again with respect to x, and integrating by parts again, leads to (35). Lastly, by standard formulae for Fourier transforms, the recurrence (37) is equivalent to the fact that g_t(λ) := (tanh λ/λ)^t solves the following differential equation:

    (d/dλ)[λ g_t(λ)] + (t−1) g_t(λ) = t g_{t−1}(λ) − t λ² g_{t+1}(λ).    (41)

Examples. To illustrate (35), from the known results recalled in Table 6, we have σ_1(x) = (π/4)/cosh²(πx/2), so we deduce from (35) that

    σ_3(x) = π [6 − 2π²(1+x²) − 6πx sinh(πx) + (6 + π²(1+x²)) cosh(πx)] / (16 cosh⁴(πx/2))

and σ_4(x) can be derived similarly from σ_2(x), but the result is quite messy. We note in passing that the density σ_n(x) of Ŝ_n appears for n = 1, 2, ... in the formula of Gaveau [27] for the fundamental solution of the heat equation on the Heisenberg group of real dimension 2n+1. As shown by Gaveau, this is closely related to the appearance of the distribution of Ŝ_1 in connection with Lévy's stochastic area formula [40].

From the Mellin transform E[(π S_2/2)^s] = 2ξ(2s) in Table 1 we deduce with (24) that

    E[(π S_4/2)^s] = (2/3)[(s−1)(2s−3) ξ(2s) + 2πs ξ(2s−2)]   (s ∈ ℂ).    (42)

Using ξ(s) = ξ(1−s) and the duplication formula for the gamma function, formula (42) can also be deduced by analytic continuation of the series for E(S_4^{−m}) for m > 0 derived from (26) in [70, Exercise 11.1] (which should be corrected by replacing 2^{3m−2} by 2^{3m}). In particular, (42) implies

    (λ/sinh λ)⁴ = 1 + Σ_{n=1}^∞ (2^{2n} λ^{2n}/(3 (2n)!)) (2n−1)(2n−3) [n B_{2n−2} − (n−1) B_{2n}]    (43)

where the series converges for |λ| < π, and the coefficient of λ^{2n} is (−1)^n E(S_4^n)/(2^n n!). Formula (43) can also be checked using (96) and (52) below.
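Expansion (43) can also be confirmed to any finite order in exact arithmetic, by computing the power series of (x/sinh x)⁴ directly and comparing coefficients; a sketch (the series helpers are ours):

```python
from fractions import Fraction
from math import comb, factorial

N = 8  # check coefficients of x^2, ..., x^16

# Bernoulli numbers by the standard recurrence
B = [Fraction(1)]
for n in range(1, 2 * N + 1):
    B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))

# series of sinh(x)/x in u = x^2, then its reciprocal x/sinh(x)
a = [Fraction(1, factorial(2 * k + 1)) for k in range(N + 1)]
inv = [Fraction(1)] + [Fraction(0)] * N
for k in range(1, N + 1):
    inv[k] = -sum(a[j] * inv[k - j] for j in range(1, k + 1))

def mul(f, g):
    return [sum(f[j] * g[k - j] for j in range(k + 1)) for k in range(N + 1)]

fourth = mul(mul(inv, inv), mul(inv, inv))   # (x/sinh x)^4, coefficients in u

def rhs(n):
    # coefficient of x^(2n) claimed in (43)
    return (Fraction(2 ** (2 * n), 3 * factorial(2 * n))
            * (2 * n - 1) * (2 * n - 3) * (n * B[2 * n - 2] - (n - 1) * B[2 * n]))
```

For n = 1 both sides give −2/3, matching (λ/sinh λ)⁴ = 1 − 2λ²/3 + ... .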

As a generalization of (42), we deduce using (24) that for n = 1, 2, ...

    E[(π S_{2n}/2)^s] = Σ_{j=0}^{n−1} b_{n,j}(s) ξ(2s − 2j)   (s ∈ ℂ)    (44)

where the b_{n,j}(s) for 0 ≤ j ≤ n−1 are polynomials in s with real coefficients, of degree at most 2(n−1), which are determined by b_{1,0}(s) = 2 and the following recurrence: for n = 1, 2, ...

    2n(2n+1) b_{n+1,j}(s) = (2n − 2s)(2n + 1 − 2s) b_{n,j}(s) + 4πsn² b_{n,j−1}(s−1)   (1 ≤ j ≤ n),

with b_{n+1,0}(s) given by the first term alone. By combining Theorem 1 and the results of Table 1, similar descriptions can be given for E[(π C_{2n}/2)^s] and E[(π S_{2n−1}/2)^s] involving ξ, and for E[(π C_{2n−1}/2)^s] involving Λ_4.

3.1 Uniqueness in Theorem 1

By consideration of moment generating functions, to establish the uniqueness claims in Theorem 1 it is enough to show that the recursions in Theorem 1 determine the positive integer moments of C_t, S_t and T_t for all t > 0. We complete the proof of Theorem 1 by establishing the following corollary, which presents the desired conclusion in more combinatorial language.

Corollary 3 Each one of the following three recursions (45), (46) and (47), with p_0(t) ≡ 1, defines a unique sequence of polynomials (p_n(t), n = 0, 1, 2, ...) of binomial type:

    (t² + t) p_n(t+2) = t² p_n(t) + (2n+1) p_{n+1}(t),    (45)

    (t² + t) p_n(t+2) = (t − 2n)(t − 2n + 1) p_n(t) + 2n t² p_{n−1}(t),    (46)

    (2n + t) p_n(t) = t p_n(t−1) + 2nt p_{n−1}(t+1).    (47)

The corresponding generating functions

    G_t(λ) := Σ_{n=0}^∞ p_n(t) ((1/2)λ²)^n/n!   (|λ| < π/2)    (48)

are (1/cos λ)^t for (45), (λ/sin λ)^t for (46), and (λ^{−1} tan λ)^t for (47).
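The three recursions can be spot-checked in exact arithmetic by expanding the stated generating functions for rational values of t; since both sides are polynomials in t, agreement at a few points per n is a genuine check. A sketch (all series helpers are ours):

```python
from fractions import Fraction
from math import factorial

N = 6  # truncation order in u = lam^2

def mul(f, g):
    return [sum(f[j] * g[k - j] for j in range(k + 1)) for k in range(N + 1)]

def inv(a):   # reciprocal of a series with a[0] = 1
    c = [Fraction(1)] + [Fraction(0)] * N
    for k in range(1, N + 1):
        c[k] = -sum(a[j] * c[k - j] for j in range(1, k + 1))
    return c

def log(a):   # logarithm of a series with a[0] = 1
    b = [Fraction(0)] * (N + 1)
    for k in range(1, N + 1):
        b[k] = (k * a[k] - sum(j * b[j] * a[k - j] for j in range(1, k))) / k
    return b

def p(logG, n, t):
    # p_n(t) = n! 2^n [u^n] exp(t log G(lam)), u = lam^2, cf. (48)
    e = [Fraction(1)] + [Fraction(0)] * N
    for k in range(1, N + 1):
        e[k] = sum(j * t * logG[j] * e[k - j] for j in range(1, k + 1)) / k
    return factorial(n) * 2 ** n * e[n]

cos_u = [Fraction((-1) ** k, factorial(2 * k)) for k in range(N + 1)]
sin_u = [Fraction((-1) ** k, factorial(2 * k + 1)) for k in range(N + 1)]  # sin(lam)/lam
LC = log(inv(cos_u))              # log of 1/cos(lam)      -> recursion (45)
LS = [-v for v in log(sin_u)]     # log of lam/sin(lam)    -> recursion (46)
LT = log(mul(sin_u, inv(cos_u)))  # log of tan(lam)/lam    -> recursion (47)
```

For instance p(LS, 1, t) returns t/3, the first moment polynomial of S in Table 3.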

Proof. That the polynomials defined by the generating functions satisfy the recursions follows from the result established in the previous section that the moments of the associated Lévy processes satisfy these recursions. (Or see the remarks below.) For (45) the uniqueness is obvious. To deal with uniqueness for (46), we consider this recurrence with

    (t − 2n)(t − 2n + 1) = 2n(2n−1) + (1−4n)t + t²    (49)

replaced by α_n + β_n t + t², and argue that the solution will be unique provided α_n ≠ 0 and

    β_n ∉ {3, 5, ..., 2n+1}    (50)

for all n, as is the case in (49) with β_n = 1 − 4n. Suppose that (p_n(t)) solves the recurrence. Take t = 0 and use α_n ≠ 0 to see that p_n(0) = 0 for all n. For n = 1 the recurrence amounts to

    α_1 = 2   and   p_1(t) = 2t/(3 − β_1).

So any solution of the recurrence must be of the form p_n(t) = Σ_{j=1}^n a_{n,j} t^j for some array of coefficients (a_{n,j}). Assume inductively that suitable coefficients a_{n−1,j} exist for some n ≥ 2. The recurrence amounts to a system of n + 3 coefficient identities obtained by equating coefficients of t^k for 0 ≤ k ≤ n + 2. These coefficient identities are trivial for k = 0 and k = n + 2, leaving a system of n + 1 linear equations in n unknowns a_{n,j}, 1 ≤ j ≤ n. The identity of coefficients of t^{n+1} reduces easily to a_{n,n}(2n + 1 − β_n) = a_{n−1,n−1}, which determines a_{n,n} by (50). For 2 ≤ k ≤ n it is easily checked that the identity of coefficients of t^k involves only the a_{n,j} for j ≥ k − 1 and a_{n−1,k−2}, and that the coefficient of a_{n,k−1} in this identity is 2(k−1) + 1 − β_n ≠ 0 by (50). Thus for each 2 ≤ k ≤ n the coefficient a_{n,k−1} can be expressed in terms of the a_{n,j} for j ≥ k and a_{n−1,k−2}. This completes the inductive proof of uniqueness for (46). A similar argument establishes uniqueness for (47). □

Remarks. For the recurrence (46), with (t − 2n)(t − 2n + 1) replaced by α_n + β_n t + t², the identity of coefficients of t reads

    p_n(2) = α_n p_n′(0)   or   Σ_{j=1}^n a_{n,j} 2^j = α_n a_{n,1}.    (51)

In general, this identity provides a constraint on α_n and β_n which is necessary for the generalized recurrence to admit a solution. That (51) holds for the p_n(t) generated by G_t(λ) = (λ/sin λ)^t with α_n = 2n(2n−1) can be checked using formula (17) with k = 1 and the expressions for the moments and cumulants of Ŝ_2 displayed in Table 6. We show later in Theorem 8 how this identity (51) provides simple characterizations of the laws of S_2 and Ŝ_2.

2 2

The recursions can also b e checked byshowing that the corresp onding generating

function G   satis es a suitable di erential equation. For instance, by routine manip-

t

ulations, the recursion 46 is equivalent to the di erential equation

2 2 2 2 0 2 00

t + t G  = t + t + t G   2 tG  + G   52

t+2 t

t t

t

where the primes denote di erentiation with resp ect to . But if G  =G  ,

t

t

then after dividing b oth sides byG  , the equation 52 reduces to an equalityof

2

co ecients of t and an equality of co ecients of t ,which read resp ectively

2

0 00

G G

2 2

1+G + =0 53

G G

and

0 00

G G

2 2 2

1 + G +2 =0: 54

G G 15

For G = = sin wehave

0 00

G 1 G 2cot 1

2

= cot and = +cot + 55

2

G G

sin

which imply 53 and 54 , hence 52 . The corresp onding di erential equation for

1 t

G  = tan  app eared in 41 , expressed in terms of g  :=G i . The recur-

t t t

sion 47 for this case is a generalization of the recursion

T n +1;k=T n; k 1 + k k +1T n; k +1

found by Comtet [19, p. 259] for the array of p ositiveintegers T n; k  de ned by

P

k n t

tan  =k != T n; k  =n!. The di erential equation for G  =1= cos  ap-

t

nk

p ears b elow, again in terms of g  :=G i , in the argument leading to 69. In this

t t

case, the p olynomials p tevaluated for t a p ositiveinteger are related to the numb ers

n

P

n k

E n; k  =n!. These Euler numbers of order k were E n; k  de ned by1= cosh  =

n

studied by Carlitz [14].

3.2 Some special moments

For X = C_t we find that (26) for p = 1/2 reduces using (38) to the simple formula

    E[C_t^{−1/2}] = Γ(t/2)/(√2 Γ((t+1)/2)).    (56)

As a check, the recursion (23) for s = −1/2 simplifies to (t+1)E[C_{t+2}^{−1/2}] = tE[C_t^{−1/2}], which is also implied by (56) and Γ(x+1) = xΓ(x). The following proposition presents some explicit formulae for E[S_t^s] in particular cases which correspond to a simplification in the recursion (24) for this function of s and t.

Proposition 4 For all t > 0

    E[S_t^{(t−1)/2}] = 2^{(t−1)/2} Γ(t/2)/√π    (57)

and

    E[S_t^{(t−2)/2}] = √π Γ(t/2)² 2^{(t−4)/2}/Γ((t+1)/2).    (58)

Remarks. Comparing formulae (56), (57) and (58), we observe the remarkable identity

    2 E[S_t^{(t−2)/2}] = π E[C_t^{−1/2}] E[S_t^{(t−1)/2}]    (59)

for all t > 0, but we have no good explanation for this. We also note that the expectations in (56), (57) and (58) are closely related to the moments of |β_1| and √A, where A has the arcsine law on [0,1]. Specifically, the expectations in (56), (57) and (58) are equal to (π/2)^{1/2} E[A^{(t−1)/2}], E[|β_1|^{t−1}], and (π/2)^{3/2} E[|β_1|^{t−1} A^{(t−1)/2}] respectively, where β_1 and A are assumed independent. But we do not see any good explanation of these coincidences either. As a check, the case t = 2 of (57) can also be read from Table 1 using ξ(1) = 1/2.

Proof. Observe from (24) that

    (1 + t) E[S_{t+2}^s] = 2st E[S_t^{s−1}]   if t = 2s − 1 or 2s.    (60)

Use these recurrences on one side, and Γ(x+1) = xΓ(x) on the other side, to see that it suffices to verify (57) and (58) for 0 < t ≤ 3. In the case of (57), for 0 < t < 1 we make use of (26) with p = (1−t)/2, so 2p + t − 1 = 0 and the right hand side of (26) reduces to a beta integral. The case t = 1 is trivial, and the formula is obtained for t ∈ (2, 3] by the recurrence argument. The case t ∈ (1, 2] is filled in by analytic continuation, using the following variant of (26) to show that E[S_t^{(t−1)/2}] is an analytic function of t: for 0 < p < 1

    E[X^p] = (p 2^{1+p}/Γ(1−p)) ∫_0^∞ (1 − φ_X(λ²/2)) λ^{−2p−1} dλ.    (61)

This formula, which appears in [61, p. 325], is easily verified using Fubini's theorem. In the case of (58), for 0 < t < 2 we make use of (26) with p = (2−t)/2. The integral in (26) for X = S_t can then be evaluated using the result of differentiation of (38) with respect to p, which is [28, p. 538, 4.253.1]

    ∫_0^1 u^{p−1} (1−u)^{q−1} log u du = B(p, q)[ψ(p) − ψ(p+q)]    (62)

for ℜp > 0, ℜq > 0, where ψ(x) := Γ′(x)/Γ(x) is the digamma function. For p = t/2 and q = 1 − t we find using the reflection formulae for ψ and Γ that

    ψ(t/2) − ψ(1 − t/2) = −π cot(πt/2) = −π Γ(t/2) Γ(1 − t/2) / (Γ((1−t)/2) Γ((1+t)/2))

and the right hand expression in (58) is finally obtained after simplification using the gamma duplication formula. □

Comparison of the formulae (57) and (58) with those obtained from (61) yields the following two evaluations of integrals involving (1/sinh λ)^t:

    ∫_0^∞ (1/λ^t − 1/sinh^t λ) dλ = Γ((3−t)/2) Γ(t/2)/(√π (t−1))   (1 < t < 3)    (63)

and

    ∫_0^∞ (1/λ^{t−1} − λ/sinh^t λ) dλ = √π Γ((4−t)/2) Γ(t/2)²/(2(t−2) Γ((t+1)/2))   (2 < t < 4).    (64)

As a check, (63) for t = 2 can be obtained by Fourier inversion of (88), using the expression for ν_Ŝ in Table 5. We also confirmed these evaluations for various t by numerical integration using Mathematica. But we do not know how to prove them analytically without the Fourier argument involved in the recursion (35), which yielded (24) and (60). For X a positive random variable with E(X^n) < ∞ for some n = 0, 1, 2, ..., and n < s < n + 1,

    E[X^s] = (1/Γ(−s)) ∫_0^∞ (φ_X(λ) − Σ_{k=0}^n E(X^k) (−λ)^k/k!) λ^{−s−1} dλ.    (65)

With φ_X(λ²/2) replaced by (1/cosh λ)^t or (λ/sinh λ)^t, these formulae (26), (61) and (65) give expressions for E(C_t^s) and E(S_t^s) for all real s except s = 1, 2, 3, ..., when the moments can be obtained from the moment polynomials discussed in Section 3. Comparison of (65) with (57) and (58) then gives two sequences of integral identities.
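Evaluation (63) is also easy to confirm numerically in Python rather than Mathematica; at t = 2 the left side is [coth λ − 1/λ]_0^∞ = 1 and the right side is Γ(1/2)Γ(1)/(√π · 1) = 1. A sketch (the change of variables and cutoff are our own choices):

```python
import math

def integral_63(t, steps=200000):
    # left side of (63): Int_0^oo (1/lam^t - 1/sinh(lam)^t) d lam,
    # with lam = u/(1-u) mapping (0,1) onto (0,oo), midpoint rule
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        lam = u / (1.0 - u)
        val = lam ** -t
        if lam < 40.0:            # beyond this 1/sinh^t is negligible (and sinh overflows)
            val -= math.sinh(lam) ** -t
        total += val / (1.0 - u) ** 2
    return total * h

def rhs_63(t):
    return math.gamma((3 - t) / 2) * math.gamma(t / 2) / (math.sqrt(math.pi) * (t - 1))
```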

3.3 Variants of Theorem 1

We start by writing (29) in the functional form

    (t² + t) E[f(Ĉ_{t+2})] = t² E[f(Ĉ_t)] + E[Ĉ_t² f(Ĉ_t)]    (66)

where f is an arbitrary bounded Borel function. This follows from (29) first for symmetric f by uniqueness of Mellin transforms, then for general f using

    E[f(Ĉ_t)] = E[f(−Ĉ_t)] = E[f̃(|Ĉ_t|)]   where   f̃(x) := (f(x) + f(−x))/2.

Consider now the Meixner process M^a, that is the Lévy process whose marginal laws are derived by exponential tilting from those of Ĉ, according to the formula

    E[f(M_t^a)] = (cos a)^t E[f(Ĉ_t) exp(aĈ_t)]   (t ≥ 0, −π/2 < a < π/2).    (67)

The functional recursion (66) for Ĉ generalizes immediately to show that X = M^a satisfies the functional recursion (68) presented in the following theorem. There is a similar variant for M^a of the moment recursion (29) for Ĉ. These observations lead us to the following characterizations.

Theorem 5 A Lévy process X satisfies the functional recursion

    c(t² + t) E[f(X_{t+2})] = t² E[f(X_t)] + E[X_t² f(X_t)]    (68)

for all bounded Borel f and all t ≥ 0, for some constant c, if and only if X is a Meixner process M^a for some a ∈ (−π/2, π/2); then c = 1/(cos a)² ≥ 1 and (68) holds for all Borel f such that the expectations involved are well defined and finite.

Before the proof, we note the following immediate corollary of this theorem and the discussion leading to (66):

Corollary 6 The process X = Ĉ is the unique Lévy process such that either

(i) the moment recursion (29) holds for all s = 0, 1, 2, ... and the distribution of X_1 is symmetric about 0, or

(ii) the functional recursion (68) holds with c = 1 for all bounded Borel f.

Proof of Theorem 5. That X = M^a satisfies (68) has already been established via the moment recursion (23) for Ĉ. This can also be seen by reversing the steps in the following proof of the converse, which parallels the analysis around (52). Suppose that X satisfies (68). By consideration of (68) for f constant, it is obvious that E(X_1²) < ∞ and c = 1 + (E X_1)². Thus c ≥ 1, and X_1 has characteristic function g with two continuous derivatives g′ and g″. Now take f(x) = e^{iλx} in (68) to obtain the following identity of functions of λ:

    c(t² + t) g^{t+2} = t² g^t − (g^t)″ = t² g^t − t(t−1) g^{t−2} (g′)² − t g^{t−1} g″

where all differentiations are with respect to λ, and for instance g^t(λ) = (g(λ))^t. Cancelling the common factor of g^t and equating coefficients of t² and t, this amounts to the pair of equalities

    c g² = 1 − (g′/g)²   and   c g² = −(g′/g)′.    (69)

The argument is completed by the following elementary result: the unique solution g of the differential equation

    (g′/g)′ = (g′/g)² − 1   with g(0) = 1 and g′(0) = i tan θ, for θ ∈ (−π/2, π/2),    (70)

is g(λ) = cos θ / cosh(λ − iθ). □

^ ^

For $\hat S$ instead of $\hat C$ the following functional recursion can be derived from (24). For
$f$ with two continuous derivatives, and $t \ge 0$, let

    $L_t f(x) := \tfrac12(x^2 + t^2)f''(x) - t\,x f'(x).$

Then for all $t \ge 0$

    $t(t+1)\,E[f(\hat S_{t+2})] = t(t+1)\,E[f(\hat S_t)] + 2E[L_t f(\hat S_t)].$                (71)

As a check, for $f(x) = e^{i\lambda x}$ this relation reduces to the previous equation (52). There is
also a variant for the family of processes derived from $\hat S$ by exponential tilting. Presumably this could be used to characterize these processes, by a uniqueness argument for
the appropriate variation of the differential equation (52). Similar remarks can be made
for $\hat T$ instead of $\hat S$.

To give another application of these recurrences, for any Lévy process $X$ subject to
appropriate moment conditions, the formula

    $P_n(y, t) := E[(y + X_t)^n] = \sum_{k=0}^n \binom{n}{k} E[X_t^k]\, y^{n-k}$

defines a polynomial in the two variables $y$ and $t$. Using $E[X_u^n \mid X_t] = P_n(X_t, u - t)$ for
$0 \le t \le u$, it is easily shown that for each $u \in \mathbb{R}$, in particular for $u = 0$, the process

    $(P_n(X_t, u - t),\ t \ge 0)$

is a martingale. In other terms, $P_n(y, -t)$ is a space-time harmonic function for $X$.
The formulae (68) and (71) yield recurrences for these space-time harmonic polynomials
$P_n(y, t)$ in the particular cases when $X$ is a Meixner process, or when $X = \hat S$. These
space-time harmonic polynomials are not the same as those considered by Schoutens
and Teugels [58], because for fixed $t$ the $P_n(y, t)$ are not orthogonal with respect to
$P(X_t \in dy)$. In particular, for $X$ a Meixner process the polynomials $P_n(y, t)$ are not the
Meixner polynomials, and their expression in terms of Meixner polynomials appears to
involve complicated connection coefficients. Thus there does not seem to be any easy way
to relate the recurrence for the $P_n(y, t)$ deduced from (68), which involves evaluations
with $t$ replaced by $t + 2$, to the classical three-term recurrence for the Meixner polynomials
[3, p. 348], in which $t$ is fixed.

4 Connections with the gamma process

The following proposition presents some elementary characterizations of the gamma process $(\Gamma_t)$ in the same vein as the characterizations of $(C_t)$, $(S_t)$ and related processes
provided elsewhere in this paper. Recall that the distribution of $\Gamma_t$ can be characterized
by the density (19), by the moments (20), or by the Laplace transform

    $E(\exp(-\lambda\Gamma_t)) = \left(\frac{1}{1+\lambda}\right)^t.$                (72)

Proposition 7  The gamma process $(\Gamma_t, t \ge 0)$ is the unique subordinator with any one
of the following four properties:
(i) for all $m = 0, 1, \ldots$

    $t\,E[\Gamma_{t+1}^m] = E[\Gamma_t^{m+1}];$                (73)

(ii) for all bounded Borel $f$

    $t\,E[f(\Gamma_{t+1})] = E[\Gamma_t f(\Gamma_t)];$                (74)

(iii) $\varphi(\lambda) := E[e^{-\lambda\Gamma_1}]$ solves the differential equation

    $\frac{\varphi'(\lambda)}{\varphi(\lambda)} = -\varphi(\lambda);$                (75)

(iv) the Lévy measure $\Lambda$ of $(\Gamma_t)$ is such that

    $P(\Gamma_1 \in dx) = x\,\Lambda(dx).$                (76)

Proof.  From (74) we deduce

    $t\varphi^{t+1} = -(\varphi^t)' = -t\varphi'\varphi^{t-1}$

and hence the differential equation (75), whose unique solution with $\varphi(0) = 1$ is obviously
$\varphi(\lambda) = 1/(1+\lambda)$. It is elementary and well known that $\Gamma$ satisfies (76), with both sides
equal to $e^{-x}dx$, and (75) is the Laplace transform equivalent of (76).  □
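Property (73) can be confirmed directly from the moment formula, since $E[\Gamma_t^m] = \Gamma(t+m)/\Gamma(t)$. The following Python check is an editorial addition (not from the paper):

```python
from math import gamma

def moment(t, m):
    # E[Gamma_t^m] = Gamma(t + m) / Gamma(t), the moments (20).
    return gamma(t + m) / gamma(t)

# Property (73): t E[Gamma_{t+1}^m] = E[Gamma_t^{m+1}].
rel_err = max(
    abs(t * moment(t + 1, m) - moment(t, m + 1)) / moment(t, m + 1)
    for t in (0.5, 1.0, 2.7) for m in range(5)
)
```

Both sides equal $\Gamma(t+m+1)/\Gamma(t)$, so the relative error is at the level of floating-point roundoff.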

Following [16], [5] and [8], let $(\Gamma_{n,t}, t \ge 0)$, $n = 0, 1, \ldots$, be a sequence of independent gamma
processes, and consider for $\theta > 0$ the subordinator $(\Gamma_{\theta,t}, t \ge 0)$ which is the following
weighted sum of these processes:

    $\Gamma_{\theta,t} := \sum_{n=0}^{\infty} \frac{2\,\Gamma_{n,t}}{\pi^2(\theta+n)^2}, \qquad t \ge 0.$                (77)

The weights are chosen so that by (72)

    $E\left(e^{-\frac{\lambda^2}{2}\Gamma_{\theta,t}}\right) = \prod_{n=0}^{\infty}\left(1 + \frac{\lambda^2}{\pi^2(\theta+n)^2}\right)^{-t} = \begin{cases} (1/\cosh\lambda)^t & \text{if } \theta = \tfrac12 \\ (\lambda/\sinh\lambda)^t & \text{if } \theta = 1 \end{cases}$                (78)

where the second equality expresses Euler's infinite products for cosh and sinh. Thus

    $(C_t) \stackrel{d}{=} (\Gamma_{\frac12,t})$  and  $(S_t) \stackrel{d}{=} (\Gamma_{1,t}).$                (79)

Consider now the sub ordinated pro cess  ;t  0 derived from Brownian motion



;t

and the sub ordinator  ;t  0 as in 77 . As shown in [5],

;t

d

1 0

 80 =  log  =



;1

d

1

0 0

where and are indep endent, with = . By 79, formula 80 for =

2

^ ^ ^ ^

and = 1 describ es the distributions of C and S resp ectively. Thus C and S are

1 1

instances of L evy pro cesses X such that for some a>0;b > 0and c 2 R there is the

d

0 0

equality in distribution X c=b = log  =  for some ; > 0 where and are

a

d

0

indep endent with = . These are the generalized z -processes studied by Grigelionis

0

, known as a z -distribution, has found applications [30 ]. The distribution of log  =

in the theory of statistics [5], and in the study of Bessel pro cesses [50].

5 L evy measures

ForaL evy pro cess X whose L evy measure  has a density,let x:= dx=dx

X X X

b e this L evy density. Directly from 76 and 77, the sub ordinator  has L evy density

at x>0 given by

8

>

>

1

1

<

 x if =

X

C

1

2 2

2

  +n x=2

 x= e = 81



>

x

>

n=0

:

 x if =1

S

Integrating term by term gives

Z

1

s

1

X

1 2

s

1

x  xdx = s   82



2

2 2s

  + n

0

n=0 22

1

or 1 involves Riemann's zeta function. On the other hand, from 2 and which for =

2

12 we can compute for 0 6= j j <

Z

1

1

2 d 1

x

2

xe  xdx = log = coth : 83

S

d sinh

0

By expanding the leftmost expression of 83 in p owers of , and comparing with 82

and the expansion 9 of coth 1, we deduce the descriptions of the L evy measure of

S given in the following table, along with Euler's formula for 2ndisplayed in 11 . In

1

this table, x>0; , and n =1; 2;:::.

2

Table 4: The L evy densities of C ,S and T

R R

1 1

 dx

n s

X

X  x=  X = x  xdx x  xdx

X n 1 X X

dx 0 0

1

s

X

2 n 1!

2 2

1 3n1  n x=2

S x s 2s 2 jB j e

2n

2s

 2n!

n=1

1

s

X

1

2 2

2 n 1!

 n  x=2

1 s n 3n1

2

e s 2s jB j C x 4 1 4 12

2n

2s

 2n!

n=1

s

2 n 1!

s n 3n1

T  x  x 4 2 4 22 jB j s 2s

C S 2n

2s

 2n!

The formulae for moments of the L evy measure of C followimmediately from those

for S and the formula

x 1

 =  x+  x 84

S S C

4 4

which is easily checked using the series for  xand  x. Put another way, 84

S C

amounts to

d

4S = S + C 85

t t t

where S and C are assumed indep endent. By the Laplace transforms 78, this is just

t t

a probabilistic expression of the duplication identitysinh2 = 2 sinh cosh . Similarly,

the formula

 x=  x+  x

C S T

corresp onds to the identity in distribution

d

C = S + T 86

t t t 23

where S and T are indep endent. By the Laplace transforms 78, this is a probabilistic

t t

expression of the identity1= cosh = = sinh tanh = .Knight [38] discovered the

decomp osition 86 of C using a representation of S , T and C in terms of Brownian

1 1 1 1

motion, whichwe recall in Section 7.

From Table 4 we deduce the formulae presented in the next table, where x 2 R;

1

and n =1; 2;::::

2

^ ^ ^

Table 5: The L evy densities of C ,S and T

R R

 dx 1 1

^

2n 2s

X

^ ^

X  x:=  X = x  xdx jxj  xdx

^ ^ ^

2n 1

X X X

dx

1 1

 jxj

coth 2n1 1

2

22s 2

^

2s jB j S

2n

2s

2jxj  n

n

1 22s 4

s n

^

C 2s jB j 4 1 4 1

2n

2s

2x sinh x=2  2n

n

22s 4

s n

^

T jB j  x  x 4 2 4 2 2s

^ ^

2n

C S

2s

 2n

The formulae for Mellin transforms and integer moments of  are read from those

^

X

of X in the previous table using the following fact: if X is a sub ordinator without drift,

then by [56 , 30.8]

Z Z

1 1

s 2s 2s

1

x  dx s> jxj  dx=E [j j ] : 87

^

T 1

T

2

0 1

2s

where E [j j ] is given by 27. The formulae for  xand x can also b e checked by

^ ^

1

S C

inverting the following Fourier transforms, obtained from the Kolmogorov representation

12 :

Z

1

2

d 1 1

2 i x

; 88 xdx = x e  log =

^

S

2

2 2

d sinh

sinh

1

Z

1

2

d 1 1

2 i x

x e  xdx = : 89 log =

^

C

2

2

d cosh

cosh

1

See [18, p. 261], [30] for variations and applications of 88. By an obvious variation of

d

^ ^ ^

85 , there is the identity2S = S + C ,so

t t t

1 x

 x=  x: 90 

^ ^ ^

C S S

2 2 24

This allows each of the formulae in the table for  x and  x to b e deduced from

^ ^

C S

the other using the elementary identity coth z=2 coth z =1= sinh z . Similar remarks

^

apply to the formulae for T , using 86 .

6 Characterizations

m

Recall that for a random variable X with E [jX j ] < 1 the cumulants  X  for 1 

n

n  m are de ned by the formula

m

n

X

i 

m i X

 as ! 0 with 2 R. 91 + o log E [e ]=  X 

n

n!

n=1

The following table collects together formulae for the characteristic functions, proba-

^ ^ ^

bility densities, even moments and even cumulants of C , S and T for t = 1 or 2. Except

t t t

^ ^

p erhaps for T , these formulae are all known. As indicated in the table, for X := ,

2 X

as in 4 , for n =1; 2;::: the nth momentorcumulantof X is obtained from the 2nth

2n n

^

moment or cumulantof X by simply dividing by E  = 2n!=2 n!. For moments

1

this is just 27 , and the companion result for cumulants is easily veri ed. Thus the

moments and cumulants of C , S and T for t = 1 or 2 can also b e read from the ta-

t t t

ble. The cumulants in particular are already determined byTable 5 and the sentence

following 15 . There is no such simple recip e for recovering the densityof X from that

^

of X := , b ecause the elementary formula

X

Z

1

2

P X 2 dt x

^

p

P X 2 dx=dx exp x 2 R 92

2t

2t

0

^

shows that recovering the densityof X from that of X amounts to inverting a Laplace

transform. As indicated in [8,Table 1], the densities of C and S for t =1; 2 are known

t t

to b e given by in nite series related to derivatives of Jacobi's theta function. But we

shall not make use of these formulae here. The distributions of T and T are describ ed

1 2

in Section 6.4. 25

^ ^ ^ ^ ^ ^

Table 6. Features of the laws of C , C , S , S , T and T .

1 2 1 2 1 2

^

2n! 2n!

i X

2n n

^ ^ ^ ^

X E e  P X 2 dx=dx E X =  X = E X   X 

2n n

n n

2 n! 2 n!

1 1

^



C A A

1 2n 2n1



cosh

2cosh x

2

2

x 1

^



C A 2A

2 2n+1 2n1



cosh

2sinh x

2



n

4

n

^



jB j S 4 2jB j

2n 1 2n

2

 2n

sinh

4 cosh x

2

 

2

  

x coth x 1

n

2 2 2 4

n

^



2n 14 jB j jB j S

2n 2n 2

2

n 

sinh

x sinh

2

n n

tanh 1 

4 24

A

2n+1

^

T jB j log coth jxj

1 2n

2n+1 2n

 4

Z

2

1

n n

tanh y y jxjdy

4 24

A

2n+3

^

jB j T

2n 2

2n+2 n

2 sinh y=2

jxj

^ ^ ^ ^

The formulae for the densities of C , C , S and T are well known Fourier transforms

1 2 1 1

^

[40 ], [23 , 1.9], [32], [9]. The Fourier transform expressed by the densityof S was found

2

^ ^

in [9]. The density of T is derived from the density of T using 37 for t =1,

2 2 1 1

which reduces to

00 0

^

x=x x= P C 2 dx=dx 93

2

2 1

^

where the second equality is read from the formulae in the table for the densities of T

1

^

and C .Thus

2

00 +

^

x=E [C jxj ]: 94

2

2

In particular

Z Z

2

1 1

2

tanh 1 y dy 14 3

0 = : 95 = d =

2

3

 2sinh y=2 

0 0

The formulae for moments and cumulants are equivalent to classical series expansions

of the hyp erb olic functions [28 , p. 35] involving the secant and tangentnumb ers A

m

and Bernoulli numb ers B .For instance, by di erentiation of the expansion of coth

n

displayed in 83 ,

1

2

2n

X

n+1 2n

2n 11 B 2 = 0 < j j < 96

2n

sinh 2n!

n=0 26

^

from whichwe read the moments of S . Table 6 reveals some remarkable similarities

2

^ ^

between moments and cumulants, esp ecially for C and S . These observations lead to

2 2

the following

Theorem 8 i Let X bearandom variable with al l moments nite and al l odd moments

equal to 0. Then

d

2n

^

X = C   X =2E X  n =0; 1; 2;::: 97

2 2n+2

while

2n

2 E X 

d

2

^

X = S  E X = and  X = n =1; 2;:::: 98

2 2n

3 n2n 1

ii Let X bearandom variable with al l moments nite. Then

n

E X 

d

X = C   X = n =0; 1; 2;::: 99

2 n+1

1

n +

2

while

n

E X  2

d

and  X = n =1; 2;:::: 100 X = S  E X =

n 2

3 n2n 1

Remarks. Similar but less pleasing characterizations could b e formulated for other

^

variables featured in Table 6. For instance, the results for C and C would involve the

1 1

ratio A =A for which there is no simple expression. In Section 7 weinterpret the

2n 2n1

identities 99 and 100 in terms of Brownian motion. Later in this section wegive

several variations of these identities.

Pro of. Each of the four implications = is found by insp ection of Table 6. These

prop erties determine the moments of these four distributions uniquely b ecause for any

n

;n =1; 2;::: and random variable X with all moments nite, the moments E X

1

1

cumulants  :=  X ;n =1; 2;::: determine each other via the recursion 18 with

n n 1

t = 1. Since each of the four distributions involved has a convergentmoment generating

function, each of these distributions is uniquely determined by its moments. 2

In the previous theorem the four distributions involved were characterized without

assuming in nite divisibility, but assuming all moments nite. The following corollary

presents corresp onding results assuming in nite divisibility, but with only a second mo-

ment assumption for most parts. 27

Corollary 9 Let X ;t  0 be the L evy process associated with a nite Kolmogorov

t

measure K via the Kolmogorov representation 12, and let U bearandom variable

X

with uniform distribution on [0; 1], with U independent of X . Then

2

i For each xed t>0, assuming that the distribution of X is symmetric,

t

d

^

X = C  K dx=P X 2 dx 101

t t X 2

while

2

K dx P X 2 dx d

d

X 2

^

= X = S  102

t t

2

dx dx dx

where for the implication = in 102 it is assumed that K has a density k x:=

X

0

K dx=dx with two continuous derivatives, and that both k x and k x tend to 0 as

X

jxj!1.

ii For each xed t>0, without the symmetry assumption,

d

2

X = C  K dx=xP U X 2 dx 103

t t X 2

while

2

d

2

and K dx=xE [1 U X 1U X 2 dx]1x> 0 104 X = S  E X =

X 2 2 t t 2

3

Pro of. Each of the implications = follows easily from corresp onding results in the

previous theorem, using 15. So do the converse implications, provided it is assumed

that X has all moments nite. That a second moment assumption is adequate for the

2

converse parts of 101 , 103 and 104 is a consequence of results proved later in the

pap er. We refer to Theorem 10 for 101 , to Prop osition 11 for 103, and to Prop osition

12 for 104 . 2

6.1 Self-generating L evy pro cesses

Morris [44] p ointed out the implication = in 97 , and coined the term self-generating

foraL evy pro cess X  whose Kolmogorov measure K is a scalar multiple of the

t X

distribution of X for some u  0:

u

K dx

X

= P X 2 dx: 105

u

K R

X

To indicate the value of u and to abbreviate, say X is SGu. In particular, X is SG0

2 2

i  =i c +  =2, whichistosay X is a Brownian motion with drift and variance 28

2

parameters c and  . We see from the Kolmogorov representation 12 that X is

t

SGui

00

 

=expu  : 106

00

0

Written in terms of g   = exp   for u = 2, this is just the rst di erential equation

^

for g in 69 . To restate either Corollary 6 or 101 , the process X = C is the unique

2

symmetric SG2 L evy process with E X =1.

1

It is easily seen that for u>0;a > 0;b 6=0

X ;t  0 is SGui aX ;t  0 is SGu=b. 107

t bt

Also, if X is SGu and the moment generating function M  :=E [expX ] = g i 

1

 

is nite for some  2 R, then the exp onentially tilted pro cess X ;t  0 with

t

 

t x

P X 2 dx=M  e P X 2 dx

t

t

^

is easily seen to b e SG . The self-generating L evy pro cesses obtained from C by these

op erations of scaling and exp onential tilting have b een called generalizedexponential

hyperbolic secant processes [32, 44, 42]. But we prefer the briefer term Meixner pro-

cesses prop osed in [58], which indicates the relationship b etween these pro cesses and the

Meixner-Pollaczek p olynomials, analogous to the well known relationship b etween Brow-

nian motion and the Hermite p olynomials [58, 57 ]. See also Grigelionis [29]. Another

self-generating family,whichisa weak limit of the Meixner family [44], is the family of

gamma pro cesses b ;t  0 for 0 6= b 2 R and a>0. Then, the orthogonal p olyno-

at

mials involved are the Laguerre p olynomials [57 ]. Morris [44, p. 74, Theorem 1] states

that the Poisson and negative binomial pro cesses are self-generating, but this is clearly

not the case. Rather, the collection of examples mentioned ab ove is exhaustive:

Theorem 10 The only L evy processes X  with the self-generating property 105 for

t

some u  0 areBrownian motions with u =0, and Meixner and Gamma processes

with u>0.

Pro of. The characterization for u = 0 is elementary, so consider X whichis SGu for

some u> 0. Observe rst that X cannot have a Gaussian comp onent, or equivalently

that K hasnomassat0.For a Gaussian comp onentwould make X have a density,

X u

implying P X = 0= 0incontradiction to 105 . Similarly, X cannot have a nite L evy

u

measure in particular X cannot have a lattice distribution b ecause then P X =0> 0

u

whichwould force K to have an atom at 0. By use of the scaling transformation 107 ,

X 29

the problem of characterizing all L evy pro cesses X that are SGu for arbitrary u>0is

reduced to the problem of characterizing all L evy pro cesses X that are SGu for some

particular u, and the choice u = 2 is most convenient. Also, by suitable choice of a in

00

107 we reduce to 106 with 0 = 1. So it is enough to nd all characteristic

exp onents   suchthat

00

  = exp2   with 0 = 0: 108

Set

D  :=1=E [expi X ] = exp[  ] 109

1

so 108 is equivalentto

00 0 2

DD D  = 1 with D 0 = 1: 110

According to Kamke [36, p.571, 6.111] the general solution of 110 is

cosh cosh b + b

D  =

b

cosh b

for some b 2 C , including the limit case when cosh b = 0. In particular, for b = ia with

a 2 =2;=2 we nd

cosh cos a + ia

D  = 111

ia

cos a

corresp onding to a Meixner pro cess, and the limit case a = =2 corresp onds to 

for the standard gamma pro cess. Other choices of a 2 R yield the same examples, by

symmetries of cosh and cos. To complete the argument, it suces to showthat1=D  

ia

is not an in nitely divisible characteristic function if a=2 R.For D derived by 109 from

aL evy pro cess X wehave

0

D 0 = i where  = E X  2 R

1

whereas

0

0 = sinhia= i sin a: D

ia

This eliminates the case when sin a=2 R, and it remains to deal with the case sin a 2

2

2

Rn[1; 1]. In that case cos a =1 sin a< 0 implying that cos a = iv for some real

v 6=0. But then, since cosh is p erio dic with p erio d 2i, the function D   in 111

ia

is periodic with period 2=v, hence so is 1=D  . If 1=D  were the characteristic

ia ia

function of X , the distribution of X would b e concentrated on a lattice, hence the L evy

1 1

measure of X would b e nite. But then X could not b e self-generating, as remarked at

the b eginning of the pro of. 2 30

6.2 The lawofC

2

We start by observing from 3 , 1 and 12 that

p

Z

1

p

2 d tanh

T x 1

1

p

E [e = log cosh ]= 2  = e x K dx 112

C

d

2

0

where K is the Kolmogorov measure of C . Thus we read from Table 4 that

C t

1

X

1

2 2

K dx P T 2 dx

C 1

 n  x=2

2

= = e : 113

dx xdx

n=1

According to 113 and the prop ertyof C displayed in 103 , if U has uniform distribution

on [0; 1] and U and C are indep endent, then

2

d

2

T = U C : 114

1 2

This also has a Brownian interpretation, indicated in Section 7.2. But in this section we

maintain a more analytic p ersp ective, and use these identities in distribution to provide

some further characterizations of the lawofC . See Section 6.4 for more ab out the

2



distribution of T .For a non-negativerandomvariable X with E X  < 1 let X denote

1

a random variable with the size-biased or length-biased distribution of X , that is



P X 2 dx=xP X 2 dx=E X :



As discussed in [62 , 4 ], the distribution of X arises naturally b oth in renewal theory,

X

and in the theory of in nitely divisible laws. For   0let' :=E [e ]. Then

X



 X 0 0

the Laplace transform of X is E [e ]= ' =E X where' is the derivativeof

X X

' . According to the L evy-Khintchine representation, the distribution of X is in nitely

X

divisible i

Z

1

0

' 

X

x

xe dx = c +

' 

X

0

for some c  0 and some L evy measure , that is i



X

]= ' '  E [e

X Y

where Y is a random variable with

P Y 2 dy = c dy +dy =E X : 115

0 31

Thus, as remarked byvan Harn and Steutel [62], for a non-negative random variable X

with E X  < 1, the equation

d



X = X + Y

is satis ed for some Y indep endentofX if and only if the lawofX is in nitely divisible,

in which case the distribution of Y is given by 115 for  the L evy measure of X . See

also [4] for further discussion. In this vein, we nd the following characterizations of the

lawof C :

2

Prop osition 11 For a non-negative random variable X with Laplacetransform ':=

E [expX ], the fol lowing conditions areequivalent:

2

p

d

2 ; i X = C ,meaning that ' = 1= cosh

2 X

ii ' = ' solves the di erential equation

X

Z

1

0

' 

2

=2 'u du; 116

'

0

iii E X =2 and

d

 2

~

X = X + U X 117

d

~ ~

where X; X and U are independent random variables with X = X and U uniform on

[0; 1];

q

1

2

'   satis es the di erential equation iv E X =2 and the function   :=1=

X

2

00 0 2

    =1 on 0; 1: 118

Pro of. This is quite straightforward, so weleave the details to the reader. For orienta-

p

2

2  tion relative to previous results, we note from 114 that ':=' =1= cosh

C

2

satis es the di erential equation 116 , and the equation 118 was already encountered

in 110 .

6.3 The lawofS

2

The following prop osition is a companion of Prop osition 11 for S instead of C .

2 2

Prop osition 12 For a non-negative random variable X with Laplacetransform ' :=

X

E [expX ] and E X =2=3, the fol lowing conditions areequivalent:

p p

d

2

i X = S , that is ' = 2= sinh 2  .

2 X 32

ii if X ;X ;::: are independent random variables with the same distribution as X ,and

1 2

M >M > ::: are the points of a Poisson point process on 0; 1 with intensity measure

1 2

2

2m 1 mdm,then

X

d

2

= X ; 119 M X

i

i

i

iii the function ' = ' solves the fol lowing di erential equation for >0:

X

Z

1

0

' 

0 2

=2 1 m' m dm; 120

'

0

 

iv if X; X and H are independent, P X 2 dx=xP X 2 dx=E X  and

1=2

P H 2 dh=h 1dh 0

then

d

 

= X + HX ; 122 X

q

1

2

v the function   := = '   satis es the di erential equation

X

2

00 0 2

    = 1 on 0; 1: 123

Pro of.

P

2

i = ii. By a simple computation with L evy measures, the nth cumulantof M X

i

i

i

P

d

2 n

M X and X is E X =n2n 1. Compare with 98 , to see that if X = S then

i 2

i

i

have the same cumulants, hence also the same distribution.

ii = i. By consideration of L evy measures as in the previous argument, this is the

same implication as = in 104 whose pro of wehave deferred until now. This is easy

if all moments of X are assumed nite, by consideration of cumulants. But without

moment assumptions we can only complete the argumentby passing via conditions iii

to v, whichwenow pro ceed to do.

ii = iii. The identity 119 implies that the lawofX is in nitely divisible, with

probability density f and L evy density  which are related as follows. From 119 , we

deduce that the Laplace transform

Z

1

x

1 e xdx ':=E [expX ] = exp

0

satis es

Z Z

1 1

2

2

y m

1 mdm ' = exp dy f y  1 e 

2

m

0 0 33

2

Using Fubini's theorem, and making the change of variables y = x=m , this yields

Z

1

dm

2

x=2 1 mf x=m ; 124

4

m

0

and 120 follows using

Z

1

0

' 

x

= xe xdx:

'

0

iii  iv. Rewrite iii as

Z

1

3 3

0 1=2 0

' =' dhh 1 ' h 125

2 2

0

But since

3 3



0 X X

' = E Xe =E e 

2 2

formula 125 is the Laplace transform equivalent of iv.

iii  v. A simple argument using integration by parts shows that iii holds i

' = ' solves the following di erential equation for >0:

X

Z



0

p

'  1 dx

1 'x: 126  =

3=2

' 2 x

0

0

Straightforward but tedious computations showthat' with '0 = 1;' 0 = 2=3

0

solves 126 if and only if  satis es 123 with  0 = 0; 0 = 1.

v  i. According to Kamke [36, 6.110], the solutions  of 123 are determined by

C  x = sinhC x + C 

1 1 2

0

for two complex constants C .Since>0with 0+ = 0; 0 = 1, it follows that

i

 x = sinh x. 2

p

2

Remarks. The fact that ':=2= sinh 2 is a solution of the di erential equation

126 app ears in Yor [70, x11.7,Cor. 11.4.1], with a di erent probabilistic interpretation.

Comparison of formula 83 with formula 11.38 of [70] shows that if W is a random

variable with P W 2 dt=3t dt, where  dtistheL evy measure of the pro cess S

S S

d

with S = T R , then W has the same distribution as the total time sp entbelow level

1 1 3

1by a 5-dimensional Bessel pro cess started at 1. This is reminiscent of the Ciesielski-

Taylor identities relating the distribution of functionals of Bessel pro cesses of di erent

dimensions [7, 17 ]. 34

As another remark, note that by iteration of 122 followed by an easy passage to the

limit,

d



X = X + H X +H H X +  +H H  H X +  127

1 1 1 2 2 1 2 n n

where X; X ;X ;::: and H ;H ;::: are indep endent, with the X distributed like X and

1 2 1 2 i

the H distributed like H . This is reminiscent of constructions considered in [63] and [51].

i

d

 

Regarding the sto chastic equation X = X + HX considered here, given a distribution

of H one may ask whether there exists such a distribution of X . It is easily shown that

if H is uniform on [0; 1], then X must have an exp onential distribution. See Pakes [45 ]

for a closely related result.

6.4 The laws of T and T

1 2

A formula for the densityof T was given already in 113 . Either from 113 and the

1

1

partial fraction expansion of tanh , or from 114 , we nd that the Mellin transform

of T involves the zeta function:

1

Z

1

s+1

2

s s+1 s+1

1

s +1 2s +2:  E [T ]= x  dx=4 1 : 128

C

1

2

2s+2



0

2 2

We deduce from 3 , 1 and tanh =1 1= cosh that the distributions of T and C

2 2

are related by the formula

1

P T 2 dx=dx = P C >x x> 0: 129

2 2

2

In terms of renewal theory [24 , p. 370], if interarrival times are distributed like C , the

2

limit distribution of the residual waiting time is the lawofT .Formula 129 allows the

2

Mellin transform of T to b e derived from the Mellin transform of C . The result app ears

2 2



in Table 1. By insp ection of the Mellin transforms of T and T ,we see that if T has

1 2

1



the size-biased distribution of T ,and U is a random variable indep endentof T with

1

1=3

1

0  U  1and P U  u= u=3 for 0  u<1, then

1=3 1=3

d



U T = T : 130

2

1=3

1

By consideration of a corresp onding di erential equation for the Laplace transform, as

in Prop ositions 11 and 12, we see that the distribution of T on 0; 1isuniquelychar-

1

acterized by this prop erty 130 with T the sum of two indep endent random variables

2

with the same distribution as T and E T =2=3.

1 1 35

^ ^

6.5 The laws of S and S

1 2

^ ^

We see from the densityof S displayed in Table 6 that  S has the logistic distribution

1 1

x 1

^

P  S >x = 1 + e  x 2 R;

1

whichhasfoundnumerous applications [39 ]. Aldous [2] relates this b oth distribution and

^

that of  S to asymptotic distributions in the random assignment problem. According

2

to [2, Theorem 2 and Lemma 6] the function

x x

e e 1+x

^

hx:= P  S >x= x>0

2

x 2

1 e 

is the probability densityon0; 1 of the limit distribution as n !1of n" ,

1; 1

n

where " ;i;j =1; 2;::: is an array of indep endentrandomvariables with the standard

i;j

x

exp onential distribution P " >x=e for x  0, and  is the p ermutation which

i;j n

P

n

" over all p ermutations  of f1;:::;ng. minimizes

i; i

i=1

For p>0we can compute

Z

1

p1 p

^

2 px hxdx = E j S j

2

0

^

using 27 , 21, and the Mellin transform of S given in Table 1. We deduce that the

2

density hxischaracterized by the remarkably simple Mellin transform

Z

1

p1

x hxdx =p 1p p 

0 131

0

where the right side is interpreted bycontinuityat p = 1. Aldous [2, Lemma 6] gave the

sp ecial cases of this formula for p = 1 and p =2.

7 Brownian Interpretations

For a sto chastic pro cess X =X ;t  0 let

t

H X :=infft : X = ag

a t

denote the hitting time of a by X . Wenow abbreviate H := H j jwhere is a

1 1

standard one-dimensional Brownian motion with = 0, and set

0

G := sup ft< H : =0g; 132

1 1 t 36

so G is the time of the last zero of b efore time H .Itiswell known [17] that

1 1

d

H = C 133

1 1

and

d

= S H G

1 1 1

where G is indep endentof H G by a last exit decomp osition. Hence in view of 86

1 1 1

d

G = T : 134

1 1

d

This Brownian interpretation of the decomp osition C = T + S , where T is indep endent

1 1 1 1

of S ,was discovered byKnight [38].

1

7.1 Brownian lo cal times and squares of Bessel pro cesses

x

Let L ;t  0;x 2 R b e the pro cess of Brownian lo cal times de ned by the o ccupation

t

density formula

Z Z

t 1

x

f  ds = f xL dx 135

s

t

0 1

for all non-negative Borel functions f , and almost sure jointcontinuityin t and x. See [53 ,

Ch. VI] for background, and pro of of the existence of Brownian lo cal times. According

to results of Ray [52], Knight [37] and Williams [66, 67 , 68 ], there are the identities in

distribution

d

a 2

L ; 0  a  1 =R ; 0  a  1

H 2;1a

1

and

d

a a 2

L L ; 0  a  1 =r ; 0  a  1

H G 2;a

1 1

2 2

;t  0 is the square of a  -dimensional := R where for  =1; 2;::: the pro cess R

;t 

Bessel process BES ,

 !

1=2



X

2

R :=

;t

i;t

i=1

where  ;t  0 for i =1; 2;::: is a sequence of indep endent one-dimensional Brownian

i;t

motions, and r := r ; 0  u  1 is the  -dimensional Bessel bridge de ned by

 ;u

2

conditioning R ; 0  u  1onR = 0. Put another way, r is the sum of squares of 

;u ;1



indep endent copies of the standard Brownian bridge. As observed in [59 ] and [47], the

2 2

can b e extended to all p ositivereal in sucha way and r de nition of the pro cesses R

  37

2 2 2

. It then follows is distributed as the sum of R and an indep endentcopyof R that R

 +

from results of Cameron-Martin [13 ] and Montroll [43 , 3.41] for  = 1 that there are

the identities in law

Z

1

d

2

 R du;   0 =C ;  0 136

=2

;u

0

and

Z

1

d

2

 r du;   0 =S ;  0: 137

=2

;u

0

In particular, for  =2,we see from these identities and the Ray-Knight theorems that

Z Z

1 1

d d d d

2 2

R du = H = C while r du = H G = S :

1 1 1 1 1

2;u 2;u

0 0

More generally, for an arbitrary p ositive measure  on [0; 1], the Laplace transforms

R R

1 1

2 2

du can b e characterized in terms of the solutions of an duand r of R

;u ;u

0 0

appropriate Sturm-Liouville equation. See [13, 43 , 48 ] or [53 , Ch. XI] for details and

further developments.

7.2 Brownian excursions

Consider now the excursions away from 0 of the re ecting Brownian motion j j :=

j j;t  0. For t>0 let Gt := supfs  t : =0g and D t := inf fs> t: =0g.

t s s

So Gt is the time of the last zero of b efore time t,and D t is the time of the rst

zero of after time t. The path fragment

 ; 0  v  D t Gt

Gt+v

is then called the Brownian excursion stradd ling time t. The works of L evy[41] and

Williams [66] show that the distribution of the Brownian excursion straddling time t is

determined by the identity

j j

Gt+uD tGt d

; 0  u  1 =r ; 0  u  1 138

3;u

1=2

D t Gt

and the normalized excursion on the left side of 138 is indep endentof D t Gt. See

also [15 , 53 ] for further treatment of Brownian excursions, and [26] for a recent study of

R

1

the moments of r ds.

3;s

0 38

Recall from around 86 that H is the rst hitting time of 1 by the re ecting Brownian

1

d d

motion j j,that G := GH , and that H = C and H G = S . According to

1 1 1 1 1 1 1

Williams [66 , 67 ], the random variable

U := max j j

t

0tG

1

has uniform distribution on [0; 1], and if T denotes the almost surely unique time at



which j j attains this maximum value over [0;G ], then the pro cesses

1

1 2 2 1 2 2

U j tU j; 0  t  T =U andU j GH  tU j; 0  t  G T =U 

 1 1 

are two indep endent copies of j j; 0  t  H , indep endentofU .Thus

t 1

2 2

G = U G =U 

1 1

2 2

where U is indep endentofG =U and G=U is explicitly represented as the sum of two

1

d

indep endent copies of H = C ,so

1 1

G T G T

d

1  1 

= + = C : 139

2

2 2 2

U U U

This provides a Brownian derivation of the identity (114). Let $H_x(R_3)$ be the first hitting time of $x$ by $R_3$. It is known [17, 38] that
$$H_1 - G_1 \overset{d}{=} H_1(R_3) \overset{d}{=} S_1. \tag{140}$$
The first identity of laws in (140) is the identity in distribution of lifetimes implied by the following identity in distribution of processes, due to Williams [66, 67]:
$$\big(|\beta_{G_1+t}|,\ 0 \le t \le H_1 - G_1\big) \overset{d}{=} \big(R_{3,t},\ 0 \le t \le H_1(R_3)\big). \tag{141}$$
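As a numerical sanity check on the second identity in (140) (our illustration, not part of the paper), one can use the fact that $|B_t|^2 - 3t$ is a martingale for a 3-dimensional Brownian motion $B$, so that by optional stopping $E[H_1(R_3)] = 1/3$, matching $E(S_1) = 1/3$. A minimal Euler-grid simulation:

```python
import math
import random

def bes3_hit_time(dt, rng):
    """Grid approximation of the first time the norm of a 3-dimensional
    Brownian motion started at 0 reaches 1, i.e. H_1(R_3)."""
    x = y = z = t = 0.0
    s = math.sqrt(dt)
    while x * x + y * y + z * z < 1.0:
        x += s * rng.gauss(0.0, 1.0)
        y += s * rng.gauss(0.0, 1.0)
        z += s * rng.gauss(0.0, 1.0)
        t += dt
    return t

def estimate_mean_hit(trials=1000, dt=1e-3, seed=2):
    rng = random.Random(seed)
    return sum(bes3_hit_time(dt, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    print(f"E[H_1(R_3)] ~= {estimate_mean_hit():.3f} (exact 1/3)")
```

The mesh `dt` and trial count are simulation choices of ours; the grid sampling slightly overestimates the hitting time since crossings between grid points are missed.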

Let $D_1 := D(H_1)$ be the time of the first return of $\beta$ to 0 after the time $H_1$ when $|\beta|$ first reaches 1. According to the strong Markov property of Brownian motion, the process $(|\beta_{H_1+u}|,\ 0 \le u \le D_1 - H_1)$ is just a Brownian motion started at 1 and stopped when it first reaches 0, and this process is independent of the process in (141). Thus the excursion of $|\beta|$ straddling time $H_1$ is decomposed into two independent fragments, a copy of $R_3$ run until it first reaches 1, followed by an unconditioned Brownian motion for the return trip back to 0. Let
$$M := \max_{H_1 \le t \le D_1} |\beta_t|.$$

As shown by Williams [68] (see also [54]), the identity in distribution (141) together with a time reversal argument shows that conditionally given $M = x$, the excursion of $|\beta|$ straddling $H_1$ decomposes at its maximum into two independent copies of $(R_{3,t},\ 0 \le t \le H_x(R_3))$ put back to back. Together with Brownian scaling and (140), this implies the identity
$$(M,\ D_1 - G_1) \overset{d}{=} \big(V^{-1},\ V^{-2} S_2\big) \tag{142}$$
where $V$, with uniform distribution on $[0,1]$, is independent of $S_2$. In particular,
$$\frac{D_1 - G_1}{M^2} \overset{d}{=} S_2 \tag{143}$$
which should be compared with (139).

7.3 Poisson interpretations

By exploiting the strong Markov property of Brownian motion at times when it returns to its starting point, Lévy [41], Itô [34] and Williams [68] have shown how to decompose the Brownian path into a Poisson point process of excursions, and to reconstruct the original path from these excursions. See Rogers–Williams [55], Revuz–Yor [53], Blumenthal [11], Watanabe [64], and Ikeda–Watanabe [33] for various accounts of this theory and its generalizations to other Markov processes. Here we give some applications of the Poisson processes derived from the heights and lengths of Brownian excursions.

For a standard one-dimensional Brownian motion $\beta$, with past maximum process $\bar\beta_t := \max_{0 \le s \le t} \beta_s$, let
$$R_t := \bar\beta_t - \beta_t, \qquad t \ge 0.$$
According to a fundamental result of Lévy [41], [53, Ch. VI, Theorem 2.3], there is the identity in law
$$(R_t,\ t \ge 0) \overset{d}{=} (|\beta_t|,\ t \ge 0)$$
so the structure of excursions of $R$ and $|\beta|$ away from zero is identical. As explained in Williams [68], Lévy's device of considering $R$ instead of $|\beta|$ simplifies the construction of various Poisson point processes, because the process $(\bar\beta_t,\ t \ge 0)$ serves as a local time process at 0 for $(R_t,\ t \ge 0)$. Consider now the excursions of $R$ away from 0, corresponding to excursions of $\beta$ below its continuously increasing past maximum process $\bar\beta$. Each excursion of $R$ away from 0 is associated with some unique level $\ell$, the constant value of $\bar\beta$ for the duration of the excursion, which equals $H_{\ell+} - H_\ell$, where we now abbreviate $H_\ell$ for $H_\ell(\beta)$ rather than $H_\ell(|\beta|)$.
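Lévy's identity $(R_t) \overset{d}{=} (|\beta_t|)$ above lends itself to a simple numerical illustration (our sketch, not the paper's): at $t = 1$, both $\bar\beta_1 - \beta_1$ and $|\beta_1|$ have mean $E|\beta_1| = \sqrt{2/\pi} \approx 0.798$.

```python
import math
import random

def max_minus_end_and_abs(n_steps, rng):
    """One Brownian path on [0,1] via Gaussian increments; return the pair
    (running max - endpoint, |endpoint|), i.e. samples of R_1 and |beta_1|."""
    dt = 1.0 / n_steps
    s = math.sqrt(dt)
    b = running_max = 0.0
    for _ in range(n_steps):
        b += s * rng.gauss(0.0, 1.0)
        running_max = max(running_max, b)
    return running_max - b, abs(b)

def compare_means(trials=4000, n_steps=256, seed=3):
    rng = random.Random(seed)
    sum_r = sum_a = 0.0
    for _ in range(trials):
        r, a = max_minus_end_and_abs(n_steps, rng)
        sum_r += r
        sum_a += a
    return sum_r / trials, sum_a / trials

if __name__ == "__main__":
    m_r, m_a = compare_means()
    print(f"E[R_1] ~= {m_r:.3f}, E[|beta_1|] ~= {m_a:.3f}, "
          f"sqrt(2/pi) = {math.sqrt(2 / math.pi):.3f}")
```

The grid maximum slightly underestimates the true maximum (a discretization effect of order $1/\sqrt{n_\text{steps}}$), so the first estimate sits a little below the target.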

Proposition 13 (Biane–Yor [9]) The random counting measure $N$ on $(0,\infty)^3$ defined by
$$N(\cdot) := \sum_\ell \mathbf 1\Big(\big(\ell,\ H_{\ell+} - H_\ell,\ \ell - \min_{H_\ell \le u \le H_{\ell+}} \beta_u\big) \in \cdot\Big), \tag{144}$$
where the sum is over the random countable set of $\ell$ with $H_{\ell+} - H_\ell > 0$, is Poisson with intensity measure $d\ell\,\lambda(dv \otimes dm)$, where $(v, m)$ denotes a generic value of
$$\big(H_{\ell+} - H_\ell,\ \ell - \min_{H_\ell \le u \le H_{\ell+}} \beta_u\big)$$
and
$$\lambda(dv \otimes dm) = \frac{dv}{\sqrt{2\pi}\,v^{3/2}}\,P\big(\sqrt{v}\,\bar r_{3,1} \in dm\big) = \frac{dm}{m^2}\,P\big(m^2 S_2 \in dv\big) \tag{145}$$
where $\bar r_{3,1} := \max_{0 \le u \le 1} r_{3,u}$ is the maximum of a 3-dimensional Bessel bridge, or standard Brownian excursion, and $S_2$ is the sum of two independent random variables, each with the common distribution of $S_1$ and $T_1(R_3)$.

See also [49] for further discussion of the agreement formula (145) and some generalizations.

Consider now the counting measure $N_3$ on $(0,\infty)^3$ defined exactly like $N$ in (144), but with the underlying Brownian motion $\beta$ replaced by a BES(3) process $R_3$. That is,
$$N_3(\cdot) := \sum_\ell \mathbf 1\Big(\big(\ell,\ H_{3,\ell+} - H_{3,\ell},\ \ell - \min_{H_{3,\ell} \le u \le H_{3,\ell+}} R_{3,u}\big) \in \cdot\Big),$$
where $H_{3,\ell} := T_\ell(R_3)$. From Proposition 13 and the McKean–Williams description of BES(3) as a conditioned Brownian motion, we deduce the following:

Corollary 14 The point process $N_3$ on $(0,\infty)^3$ is Poisson with intensity measure
$$dh\,\mathbf 1(m < h)\,\lambda(dv \otimes dm)$$
where $\lambda$ is the intensity measure on $(0,\infty)^2$ defined in (145).

Proof. The Poisson character of the point process follows easily from the strong Markov property of $R_3$. To identify the intensity measure, it is enough to consider its restriction to $(a,b) \times (0,\infty)^2$ for arbitrary $0 < a < b$. This restriction is generated by $R_3$ after time $H_{3,a}$ and before time $H_{3,b}$, during which random interval $R_3$ evolves like $\beta$ conditioned to hit $b$ before 0. The Poisson measure $N$ derived from $\beta$ in Proposition 13 is then conditioned to have no points $(h, v, m)$ such that $h \in (a,b)$ and $m \ge h$. The conditioned Poisson measure is just Poisson with a restricted intensity measure, and the conclusion follows. $\Box$

Since
$$H_{3,1} = \sum_{0 < \ell < 1} (H_{3,\ell+} - H_{3,\ell}),$$
where the sum is over the countable random set of $\ell$ with $0 < \ell < 1$ and $H_{3,\ell+} - H_{3,\ell} > 0$, and these $H_{3,\ell+} - H_{3,\ell} > 0$ are the points of a Poisson point process whose intensity is the Lévy measure $\Lambda_S(dt)$ of the common distribution of $H_{3,1}$ and $S_1$, we obtain for each non-negative Borel function $g$
$$E\left[\sum_{0 < \ell < 1} g(H_{3,\ell+} - H_{3,\ell})\right] = \int_0^\infty g(t)\,\Lambda_S(dt). \tag{146}$$

In particular, for $g(x) = x^{s/2}$ we deduce from Table 4 the following probabilistic interpretation of Riemann's $\xi$ function:
$$E\left[\sum_{0 < \ell < 1} (H_{3,\ell+} - H_{3,\ell})^{s/2}\right] = \frac{2^{s/2}\,2\,\xi(s)}{\pi^{s/2}\,s(s-1)}, \qquad \Re(s) > 1. \tag{147}$$
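To indicate where the constant in (147) comes from, here is a standard computation, sketched for the reader's convenience and consistent with the Laplace transform $(\sqrt{2\lambda}/\sinh\sqrt{2\lambda})$ of $S_1$ recalled in the abstract: the product expansion $\sinh z/z = \prod_{n\ge1}\big(1 + z^2/(n^2\pi^2)\big)$ and the Frullani integral $\log(1+\lambda/c) = \int_0^\infty (1-e^{-\lambda x})\,x^{-1}e^{-cx}\,dx$ identify the Lévy measure of $S_1$, whose Mellin transform brings in $\Gamma$ and $\zeta$:

```latex
\Lambda_S(dx) = \sum_{n=1}^{\infty} x^{-1} e^{-n^{2}\pi^{2}x/2}\,dx ,
\qquad
\int_{0}^{\infty} x^{s/2}\,\Lambda_S(dx)
 = \Gamma\Big(\frac{s}{2}\Big)\Big(\frac{2}{\pi^{2}}\Big)^{s/2}\zeta(s)
 = \frac{2^{s/2}\,2\,\xi(s)}{\pi^{s/2}\,s(s-1)}
\qquad (\Re(s) > 1),
```

using $\xi(s) = \tfrac{1}{2}\,s(s-1)\,\pi^{-s/2}\,\Gamma(s/2)\,\zeta(s)$; combined with (146) this yields (147).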

See also [35, §4.10] and [12] for more general discussions of the Poisson character of the jumps of the first passage process of a one-dimensional diffusion.
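The constant in (147) can also be sanity-checked numerically against the cumulants of $S_1$ (our own check, not from the paper): since the cumulants of an infinitely divisible law with no drift are the moments of its Lévy measure, $\int_0^\infty x\,\Lambda_S(dx) = E(S_1) = 1/3$ and $\int_0^\infty x^2\,\Lambda_S(dx) = \operatorname{Var}(S_1) = 2/45$; the Lévy density $\sum_{n\ge1} x^{-1}e^{-n^2\pi^2 x/2}$ is the assumption being tested.

```python
import math

def levy_moment(s, n_terms=100000):
    """Integral of x^(s/2) against the (assumed) Levy measure of S_1,
    Lambda_S(dx) = sum_n x^{-1} exp(-n^2 pi^2 x / 2) dx, computed term by
    term: each term integrates to Gamma(s/2) * (2 / (n^2 pi^2))^(s/2)."""
    c = math.gamma(s / 2) * (2.0 / math.pi ** 2) ** (s / 2)
    return c * sum(n ** -s for n in range(1, n_terms + 1))

if __name__ == "__main__":
    print(f"s=2: {levy_moment(2):.6f}  (mean of S_1, exact 1/3 = {1/3:.6f})")
    print(f"s=4: {levy_moment(4):.6f}  (variance of S_1, exact 2/45 = {2/45:.6f})")
```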

If $\tilde M_1 > \tilde M_2 > \cdots$ are the ranked values of the maxima of the excursions of $\bar R_3 - R_3$ away from 0 up to time $H_1(R_3)$, and $\tilde V_i$ is the length of the excursion whose maximum is $\tilde M_i$, then we have the representation
$$H_1(R_3) = \sum_{i=1}^\infty \tilde V_i = \sum_{i=1}^\infty \tilde M_i^2\, S_{2,i} \tag{148}$$
where the $S_{2,i}$ are independent copies of $S_2$, with the $S_{2,i}$ independent also of the $\tilde M_i$. Now the $\tilde M_i$ are the ranked points of a Poisson process on $(0,1)$ with intensity measure $m^{-2}(1-m)\,dm$. If the last sum in (148) is considered for $\tilde M_i$ that are instead the ranked points of a Poisson process on $(0,1)$ with intensity measure $2m^{-2}(1-m)\,dm$, the result is distributed as the sum of two independent copies of $H_1(R_3)$, that is like $S_2$. Together with the fact that $E(S_2) = 2E[H_1(R_3)] = 2/3$, the above argument shows that $X = S_2$ satisfies condition (ii) of Proposition 12. Indeed, it was by this argument that we first discovered this property of the law of $S_2$.
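The intensity $m^{-2}(1-m)\,dm$ used above is obtained from Corollary 14 by integrating out the level $h$ over $(0,1)$, a one-line check included here for convenience; the $m$-marginal of $\lambda(dv \otimes dm)$ in (145) is $dm/m^2$, so

```latex
\int_{0}^{1} dh\,\mathbf{1}(m < h)\,\frac{dm}{m^{2}}
  \;=\; \frac{1-m}{m^{2}}\,dm \qquad (0 < m < 1).
```

Doubling this intensity amounts, by the superposition property of Poisson processes, to superposing two independent copies of the excursion point process, which is why the doubled sum is distributed as the sum of two independent copies of $H_1(R_3)$.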

7.4 A path transformation

There are several known constructions of the path of a BES$_0$(3) process, or segments thereof, from the path of a one-dimensional Brownian motion $\beta$. It will be clear to readers familiar with Itô's excursion theory that the previous discussion can be lifted from the description of the point processes of heights and lengths of excursions of $R_3$ below its past maximum process to a similar description of a corresponding point process of excursions, defined as a random counting measure on $(0,\infty) \times \Omega$ where $\Omega$ is a suitable path space. Essentially, the conclusion is that the point process of excursions of $R_3$ below its past maximum process is identical in law to the point process obtained from the excursions of $\beta$ below its past maximum process by deletion of every excursion whose height exceeds its starting level, meaning that the path of $\beta$ hits zero during that excursion interval. Since the path of $R_3$ can be reconstructed from its process of excursions below its past maximum process, we obtain the following result, found independently by Jon Warren (unpublished).

Theorem 15 For a standard one-dimensional Brownian motion $\beta$, let $\bar\beta_t := \max_{0 \le u \le t} \beta_u$ and let $R_t := \bar\beta_t - \beta_t$. Let $(G_s, D_s)$ be the excursion interval of $R$ away from 0 that straddles time $s$. Let $M_s := \max_{G_s \le u \le D_s} R_u$ be the maximum of $R$ over this excursion interval, and
$$U_t := \int_0^t \mathbf 1(M_s \le \bar\beta_s)\,ds \qquad\text{and}\qquad \tau_u := \inf\{t : U_t > u\}.$$
Then the process $(R_{\tau_u},\ u \ge 0)$ is a BES$_0$(3).

Note that the continuous increasing process $U$ in this result is anticipating. See also Section 4 of Pitman [46] for closely related results.
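A discrete rendering of this transformation (our sketch on a random-walk approximation; the function names and the strict-inequality convention at the boundary are ours) deletes every excursion of $R = \bar\beta - \beta$ whose maximum reaches the current level $\bar\beta$, i.e. during which the walk hits zero, and concatenates the remaining time intervals:

```python
import random

def transformed_walk(n_steps, rng):
    """Theorem 15 on a +-1 walk: build R = running_max - walk, then keep
    only those excursions of R away from 0 whose maximum stays strictly
    below the level (the constant running max on the excursion interval),
    i.e. those during which the walk does not hit zero.  Returns the
    concatenated kept values of R, a discrete analogue of (R_{tau_u})."""
    walk = [0]
    pos = 0
    for _ in range(n_steps):
        pos += rng.choice((-1, 1))
        walk.append(pos)
    R, level = [], []
    run_max = 0
    for x in walk:
        run_max = max(run_max, x)
        R.append(run_max - x)
        level.append(run_max)
    kept = []
    i, n = 0, len(R)
    while i < n:
        if R[i] == 0:              # zeros of R are always kept
            kept.append(0)
            i += 1
            continue
        j = i
        while j < n and R[j] > 0:  # [i, j) is an excursion interval of R
            j += 1
        if max(R[i:j]) < level[i]:
            kept.extend(R[i:j])
        i = j
    return kept

if __name__ == "__main__":
    kept = transformed_walk(20000, random.Random(7))
    print(f"kept {len(kept)} of 20001 time points; "
          f"max of transformed path = {max(kept)}")
```

On the kept set the concatenated path is still skip-free (successive values differ by at most 1), since every kept excursion begins and ends next to a zero of $R$; this is the random-walk shadow of the continuity of $(R_{\tau_u})$.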

Acknowledgment

We thank Ira Gessel for directing us to the treatment of Euler and tangent numbers in [19].

References

[1] D.J. Aldous. The continuum random tree II: an overview. In M.T. Barlow and N.H. Bingham, editors, Stochastic Analysis, pages 23–70. Cambridge University Press, 1991.

[2] D.J. Aldous. The ζ(2) limit in the random assignment problem. Unpublished; available via homepage http://www.stat.berkeley.edu/users/aldous, 2000.

[3] G. E. Andrews, R. Askey, and R. Roy. Special Functions, volume 71 of Encyclopedia of Math. and Appl. Cambridge Univ. Press, Cambridge, 1999.

[4] R. Arratia and L. Goldstein. Size biasing: when is the increment independent? Preprint, 1998.

[5] O. Barndorff-Nielsen, J. Kent, and M. Sørensen. Normal variance-mean mixtures and z distributions. International Statistical Review, 50:145–159, 1982.

[6] J. Bertoin. Lévy processes. Cambridge University Press, Cambridge, 1996.

[7] P. Biane. Comparaison entre temps d'atteinte et temps de séjour de certaines diffusions réelles. In Séminaire de Probabilités XIX, pages 291–296. Springer, 1985. Lecture Notes in Math. 1123.

[8] P. Biane, J. Pitman, and M. Yor. Probability laws related to the Jacobi theta and Riemann zeta functions, and Brownian excursions. Technical Report 569, Dept. Statistics, U.C. Berkeley, 1999. To appear in Bull. A.M.S.

[9] P. Biane and M. Yor. Valeurs principales associées aux temps locaux Browniens. Bull. Sci. Math. (2), 111:23–101, 1987.

[10] P. Billingsley. Probability and Measure. Wiley, New York, 1995. 3rd ed.

[11] R. M. Blumenthal. Excursions of Markov processes. Birkhäuser, 1992.

[12] A. N. Borodin and P. Salminen. Handbook of Brownian motion – facts and formulae. Birkhäuser, 1996.

[13] R. H. Cameron and W. T. Martin. Transformation of Wiener integrals under a general class of linear transformations. Trans. Amer. Math. Soc., 58:184–219, 1945.

[14] L. Carlitz. Bernoulli and Euler numbers and orthogonal polynomials. Duke Math. J., 26:1–15, 1959.

[15] K. L. Chung. Excursions in Brownian motion. Arkiv för Matematik, 14:155–177, 1976.

[16] K. L. Chung. A cluster of great formulas. Acta Math. Hungar., 39:65–67, 1982.

[17] Z. Ciesielski and S. J. Taylor. First passage times and sojourn density for Brownian motion in space and the exact Hausdorff measure of the sample path. Trans. Amer. Math. Soc., 103:434–450, 1962.

[18] A. Comtet, C. Monthus, and M. Yor. Exponential functionals of Brownian motion and disordered systems. J. Appl. Probab., 35(2):255–271, 1998.

[19] L. Comtet. Advanced Combinatorics. D. Reidel Pub. Co., Boston, 1974. Translated from the French.

[20] A. Di Bucchianico. Probabilistic and analytical aspects of the umbral calculus. Stichting Mathematisch Centrum, Centrum voor Wiskunde en Informatica, Amsterdam, 1997.

[21] K. Dilcher, L. Skula, and I. Sh. Slavutskiĭ. Bernoulli numbers, volume 87 of Queen's Papers in Pure and Applied Mathematics. Queen's University, Kingston, ON, 1991. Bibliography 1713–1990.

[22] A. Erdélyi et al. Higher Transcendental Functions, volume I of Bateman Manuscript Project. McGraw-Hill, New York, 1953.

[23] A. Erdélyi et al. Tables of Integral Transforms, volume I of Bateman Manuscript Project. McGraw-Hill, New York, 1954.

[24] W. Feller. An Introduction to Probability Theory and its Applications, Vol. 2, 2nd ed. Wiley, New York, 1971.

[25] P. J. Fitzsimmons and R. K. Getoor. On the distribution of the Hilbert transform of the local time of a symmetric Lévy process. Ann. Probab., 20(3):1484–1497, 1992.

[26] Ph. Flajolet and G. Louchard. Analytic variations on the Airy distribution. Preprint available via www.ulb.ac.be/di/mcs/louchard, 2000.

[27] B. Gaveau. Principe de moindre action, propagation de la chaleur et estimées sous elliptiques sur certains groupes nilpotents. Acta Math., 139(1-2):95–153, 1977.

[28] I.S. Gradshteyn and I.M. Ryzhik. Table of Integrals, Series and Products (corrected and enlarged edition). Academic Press, New York, 1980.

[29] B. Grigelionis. Processes of Meixner type. Liet. Matem. Rink., 39:40–51, 1999.

[30] B. Grigelionis. Generalized z-distributions and related stochastic processes. Technical Report 2000-22, Institute of Mathematics and Informatics, Vilnius, Lithuania, 2000.

[31] A. Hald. The early history of the cumulants and the Gram-Charlier series. International Statistical Review, 68:137–153, 2000.

[32] W. L. Harkness and M. L. Harkness. Generalized hyperbolic secant distributions. J. Amer. Statist. Assoc., 63:329–337, 1968.

[33] N. Ikeda and S. Watanabe. Stochastic Differential Equations and Diffusion Processes. North Holland/Kodansha, 1989. 2nd edition.

[34] K. Itô. Poisson point processes attached to Markov processes. In Proc. 6th Berk. Symp. Math. Stat. Prob., volume 3, pages 225–240, 1971.

[35] K. Itô and H. P. McKean. Diffusion Processes and their Sample Paths. Springer, 1965.

[36] E. Kamke. Differentialgleichungen. Lösungsmethoden und Lösungen. Band I: Gewöhnliche Differentialgleichungen. Chelsea Publishing Co., New York, 1959. 3rd edition.

[37] F. B. Knight. Random walks and a sojourn density process of Brownian motion. Trans. Amer. Math. Soc., 107:36–56, 1963.

[38] F. B. Knight. Brownian local times and taboo processes. Trans. Amer. Math. Soc., 143:173–185, 1969.

[39] S. Kotz and N.L. Johnson, editors. Logistic Distribution, volume 5 of Encyclopedia of Statistical Sciences, pages 123–128. Wiley, 1985.

[40] P. Lévy. Wiener's random function and other Laplacian random functions. In Proc. Second Berkeley Symposium on Mathematical Statistics and Probability, pages 171–186. U.C. Press, 1951.

[41] P. Lévy. Processus Stochastiques et Mouvement Brownien. Gauthier-Villars, Paris, 1965. First ed. 1948.

[42] R. Magiera. Sequential estimation for the generalized exponential hyperbolic secant process. Statistics, 19(2):271–281, 1988.

[43] E. W. Montroll. Markov chains and quantum theory. Comm. on Pure and Appl. Math., V:415–453, 1952.

[44] C. N. Morris. Natural exponential families with quadratic variance functions. Ann. Statist., 10(1):65–80, 1982.

[45] A. G. Pakes. On characterizations through mixed sums. Austral. J. Statist., 34(2):323–339, 1992.

[46] J. Pitman. Cyclically stationary Brownian local time processes. Probab. Th. Rel. Fields, 106:299–329, 1996.

[47] J. Pitman and M. Yor. A decomposition of Bessel bridges. Z. Wahrsch. Verw. Gebiete, 59:425–457, 1982.

[48] J. Pitman and M. Yor. Arcsine laws and interval partitions derived from a stable subordinator. Proc. London Math. Soc. (3), 65:326–356, 1992.

[49] J. Pitman and M. Yor. Decomposition at the maximum for excursions and bridges of one-dimensional diffusions. In N. Ikeda, S. Watanabe, M. Fukushima, and H. Kunita, editors, Itô's Stochastic Calculus and Probability Theory, pages 293–310. Springer-Verlag, 1996.

[50] J. Pitman and M. Yor. Quelques identités en loi pour les processus de Bessel. In Hommage à P.A. Meyer et J. Neveu, Astérisque, pages 249–276. Soc. Math. de France, 1996.

[51] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Ann. Probab., 25:855–900, 1997.

[52] D. B. Ray. Sojourn times of a diffusion process. Ill. J. Math., 7:615–630, 1963.

[53] D. Revuz and M. Yor. Continuous martingales and Brownian motion. Springer, Berlin-Heidelberg, 1999. 3rd edition.

[54] L. C. G. Rogers. Williams' characterization of the Brownian excursion law: proof and applications. In Séminaire de Probabilités XV, pages 227–250. Springer-Verlag, 1981. Lecture Notes in Math. 850.

[55] L. C. G. Rogers and D. Williams. Diffusions, Markov Processes and Martingales, Vol. II: Itô Calculus. Wiley, 1987.

[56] K. Sato. Lévy processes and infinitely divisible distributions. Cambridge University Press, Cambridge, 1999. Translated from the 1990 Japanese original, revised by the author.

[57] W. Schoutens. Stochastic Processes and Orthogonal Polynomials, volume 146 of Lecture Notes in Statistics. Springer, New York, 2000.

[58] W. Schoutens and J. L. Teugels. Lévy processes, polynomials and martingales. Comm. Statist. Stochastic Models, 14(1-2):335–349, 1998. Special issue in honor of Marcel F. Neuts.

[59] T. Shiga and S. Watanabe. Bessel diffusions as a one-parameter family of diffusion processes. Z. Wahrsch. Verw. Gebiete, 27:37–46, 1973.

[60] L. Smith and P. Diaconis. Honest Bernoulli excursions. J. Appl. Probab., 25:464–477, 1988.

[61] K. Urbanik. Moments of sums of independent random variables. In S. Cambanis, J. K. Ghosh, R. L. Karandikar, and P. K. Sen, editors, Stochastic processes. A festschrift in honour of Gopinath Kallianpur, pages 321–328. Springer, New York, 1993.

[62] K. van Harn and F. W. Steutel. Infinite divisibility and the waiting-time paradox. Comm. Statist. Stochastic Models, 11(3):527–540, 1995.

[63] W. Vervaat. On a stochastic difference equation and a representation of non-negative infinitely divisible random variables. Adv. Appl. Prob., 11:750–783, 1979.

[64] S. Watanabe. Poisson point process of Brownian excursions and its applications to diffusion processes. In Probability (Proc. Sympos. Pure Math., Vol. XXXI, Univ. Illinois, Urbana, Ill., 1976), pages 153–164. Amer. Math. Soc., Providence, R.I., 1977.

[65] A. Weil. Prehistory of the zeta-function. In Number theory, trace formulas and discrete groups (Oslo, 1987), pages 1–9. Academic Press, Boston, MA, 1989.

[66] D. Williams. Decomposing the Brownian path. Bull. Amer. Math. Soc., 76:871–873, 1970.

[67] D. Williams. Path decomposition and continuity of local time for one dimensional diffusions I. Proc. London Math. Soc. (3), 28:738–768, 1974.

[68] D. Williams. Diffusions, Markov Processes, and Martingales, Vol. 1: Foundations. Wiley, Chichester, New York, 1979.

[69] D. Williams. Brownian motion and the Riemann zeta-function. In G. R. Grimmett and D. J. A. Welsh, editors, Disorder in Physical Systems, pages 361–372. Clarendon Press, Oxford, 1990.

[70] M. Yor. Some Aspects of Brownian Motion, Part II: Some Recent Martingale Problems. Lectures in Math., ETH Zürich. Birkhäuser, 1997.